WorldWideScience

Sample records for video images recorded

  1. Optimisation of occupational radiation protection in image-guided interventions: exploring video recordings as a tool in the process.

    Science.gov (United States)

    Almén, Anja; Sandblom, Viktor; Rystedt, Hans; von Wrangel, Alexa; Ivarsson, Jonas; Båth, Magnus; Lundh, Charlotta

    2016-06-01

    The overall purpose of this work was to explore how video recordings can contribute to the process of optimising occupational radiation protection in image-guided interventions. Video-recorded material from two image-guided interventions was produced and used to investigate to what extent it is possible to observe and assess dose-affecting actions in video recordings. Using the recorded material, it was to some extent possible to connect the choice of imaging techniques to the medical events during the procedure and, to a lesser extent, to connect these technical and medical issues to the occupational exposure. It was possible to identify a relationship between the occupational exposure level and the positioning of staff and the use of shielding. However, detailed dose-rate values could not be observed on the recordings, and changes in occupational exposure level caused by adjustments of exposure settings could not be identified. In conclusion, video recording is a promising tool for identifying dose-affecting instances, allowing for a deeper knowledge of the interdependency between the management of the medical procedure, the applied imaging technology and the occupational exposure level. However, for full information about the dose-affecting actions, the equipment used and the recording settings have to be thoroughly planned. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  3. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 microseconds it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
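
    The abstract does not reproduce the correction equations, but the described calibration-then-unmix scheme can be sketched as a linear model: crosstalk between the three flash-illuminated exposures is captured in a 3x3 mixing matrix estimated once (each flash fired alone), then inverted for every pixel. The matrix values below are invented for illustration.

```python
import numpy as np

# Hypothetical 3x3 mixing matrix: entry (i, j) is how strongly the
# scene lit by flash j leaks into camera colour channel i. The values
# are invented; in practice they would come from the one-time
# calibration described in the abstract.
M = np.array([[0.90, 0.08, 0.02],
              [0.07, 0.85, 0.06],
              [0.03, 0.07, 0.92]])
M_inv = np.linalg.inv(M)

def separate_frames(rgb_field):
    """Recover the three time-resolved scenes from one colour field.

    rgb_field: (H, W, 3) array of digitised R/G/B values.
    Returns (H, W, 3) with the last axis indexing the three
    flash-illuminated frames, crosstalk ('ghosting') removed.
    """
    return np.einsum('ij,hwj->hwi', M_inv, rgb_field)

# Synthetic round trip: mix three known scenes through M, then unmix.
rng = np.random.default_rng(0)
scenes = rng.random((4, 4, 3))
captured = np.einsum('ij,hwj->hwi', M, scenes)
recovered = separate_frames(captured)
assert np.allclose(recovered, scenes)
```

    In this idealised round trip the unmixing is exact; on real footage the residual error depends on how well the linear model and the calibrated coefficients describe the optical and electronic crosstalk.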

  4. Scan converting video tape recorder

    Science.gov (United States)

    Holt, N. I. (Inventor)

    1971-01-01

    A video tape recorder is disclosed of sufficient bandwidth to record monochrome television signals or standard NTSC field sequential color at current European and American standards. The system includes scan conversion means for instantaneous playback at scanning standards different from those at which the recording is being made.

  5. Video recording in movement disorders: practical issues.

    Science.gov (United States)

    Duker, Andrew P

    2013-10-01

    Video recording can provide a valuable and unique record of the physical examinations of patients with a movement disorder, capturing nuances of movement and supplementing the written medical record. In addition, video is an indispensable tool for education and research in movement disorders. Digital file recording and storage has largely replaced analog tape recording, increasing the ease of editing and storing video records. Practical issues to consider include hardware and software configurations, video format, the security and longevity of file storage, patient consent, and video protocols.

  6. Color image and video enhancement

    CERN Document Server

    Lecca, Michela; Smolka, Bogdan

    2015-01-01

    This text covers state-of-the-art color image and video enhancement techniques. The book examines the multivariate nature of color image/video data as it pertains to contrast enhancement, color correction (equalization, harmonization, normalization, balancing, constancy, etc.), noise removal and smoothing. This book also discusses color and contrast enhancement in vision sensors and applications of image and video enhancement. · Focuses on enhancement of color images/video · Addresses algorithms for enhancing color images and video · Presents coverage on super resolution, restoration, inpainting, and colorization.

  7. Multiple Generations on Video Tape Recorders.

    Science.gov (United States)

    Wiens, Jacob H.

    Helical scan video tape recorders were tested for their dubbing characteristics in order to make selection data available to media personnel. The equipment, two recorders of each type tested, was submitted by the manufacturers. The test was designed to produce quality evaluations for three generations of a single tape, thereby encompassing all…

  8. Detectors for scanning video imagers

    Science.gov (United States)

    Webb, Robert H.; Hughes, George W.

    1993-11-01

    In scanning video imagers, a single detector sees each pixel for only 100 ns, so the bandwidth of the detector needs to be about 10 MHz. How this fact influences the choice of detectors for scanning systems is described here. Some important parametric quantities obtained from manufacturer specifications are related and it is shown how to compare detectors when specified quantities differ.

  9. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed, and the reconstruction qualities using TwIST and GMM are compared.
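
    The TCI measurement process described above can be sketched as a forward model: each of the T high-speed frames is modulated by its own coded mask and the sensor integrates the result into a single compressive frame. The sizes and the binary masks below are our illustrative choices; the reconstruction step (TwIST or GMM, solving the under-determined inverse problem with a prior) is beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 8, 8                     # compression ratio T=8, one 8x8 patch

frames = rng.random((T, H, W))        # high-speed frames to be recovered
masks = rng.integers(0, 2, (T, H, W)).astype(float)   # per-frame coded masks

# Forward model: the sensor integrates the mask-modulated frames over a
# single exposure, giving one compressive measurement per patch.
measurement = (masks * frames).sum(axis=0)            # shape (H, W)
assert measurement.shape == (H, W)
```

    Recovering the 8 frames from this single (H, W) measurement is what makes the inverse problem under-determined by a factor of T, which is why a sparsity or mixture prior is needed.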

  10. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  11. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. · Discusses localization of multimedia data · Examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging) · Covers data-driven as well as semantic location estimation.

  12. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    Henken, K.R.; Jansen, F.W.; Klein, J.; Stassen, L.P.S.; Dankelman, J.; Van den Dobbelsteen, J.J.

    2012-01-01

    Background Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear

  13. Enhanced Video Surveillance (EVS) with speckle imaging

    Energy Technology Data Exchange (ETDEWEB)

    Carrano, C J

    2004-01-13

    Enhanced Video Surveillance (EVS) with Speckle Imaging is a high-resolution imaging system that substantially improves resolution and contrast in images acquired over long distances. This technology will increase image resolution up to an order of magnitude or greater for video surveillance systems. The system's hardware components are all commercially available and consist of a telescope or large-aperture lens assembly, a high-performance digital camera, and a personal computer. The system's software, developed at LLNL, extends standard speckle-image-processing methods (used in the astronomical community) to solve the atmospheric blurring problem associated with imaging over medium to long distances (hundreds of meters to tens of kilometers) through horizontal or slant-path turbulence. This novel imaging technology will not only enhance national security but also will benefit law enforcement, security contractors, and any private or public entity that uses video surveillance to protect their assets.
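
    The LLNL speckle-processing software itself is not described in detail above. As a toy stand-in for the simplest ingredient of such pipelines, the sketch below aligns short-exposure frames of a scene (each randomly displaced, as by atmospheric tilt) via FFT cross-correlation and averages them ("shift-and-add"). Real speckle imaging also corrects higher-order blurring; all names and parameters here are ours.

```python
import numpy as np

def register_shift(ref, frame):
    """Integer (dy, dx) such that np.roll(frame, (dy, dx), (0, 1)) aligns it with ref."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(1)
scene = np.zeros((32, 32))
scene[12:16, 12:16] = 1.0             # toy bright target

# Short-exposure frames: the same scene, randomly displaced and
# lightly corrupted by noise.
frames = [np.roll(scene, (rng.integers(-3, 4), rng.integers(-3, 4)), (0, 1))
          + 0.05 * rng.standard_normal(scene.shape) for _ in range(20)]

# Shift-and-add: register every frame to the first, then average.
ref = frames[0]
stack = np.mean([np.roll(f, register_shift(ref, f), (0, 1)) for f in frames],
                axis=0)
```

    After alignment the target adds coherently while the noise averages down, so the stacked image is sharper and cleaner than any single frame.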

  14. An innovative technique for recording picture-in-picture ultrasound videos.

    Science.gov (United States)

    Rajasekaran, Sathish; Finnoff, Jonathan T

    2013-08-01

    Many ultrasound educational products and ultrasound researchers present diagnostic and interventional ultrasound information using picture-in-picture videos, which simultaneously show the ultrasound image and transducer and patient positions. Traditional techniques for creating picture-in-picture videos are expensive, nonportable, or time-consuming. This article describes an inexpensive, simple, and portable way of creating picture-in-picture ultrasound videos. This technique uses a laptop computer with a video capture device to acquire the ultrasound feed. Simultaneously, a webcam captures a live video feed of the transducer and patient position and live audio. Both sources are streamed onto the computer screen and recorded by screen capture software. This technique makes the process of recording picture-in-picture ultrasound videos more accessible for ultrasound educators and researchers for use in their presentations or publications.

  15. Educational Video Recording and Editing for The Hand Surgeon

    OpenAIRE

    Rehim, Shady A.; Chung, Kevin C.

    2015-01-01

    Digital video recordings are increasingly used across various medical and surgical disciplines including hand surgery for documentation of patient care, resident education, scientific presentations and publications. In recent years, the introduction of sophisticated computer hardware and software technology has simplified the process of digital video production and improved means of disseminating large digital data files. However, the creation of high quality surgical video footage requires b...

  16. Clients' experiences of video recordings of their psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies of how these recordings are experienced by the clients. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents a qualitative, explorative study of clients' experiences…

  17. Video recording of ophthalmic surgery--ethical and legal considerations.

    Science.gov (United States)

    Turnbull, Andrew M J; Emsley, Elizabeth S

    2014-01-01

    Video documentation is increasingly used in ophthalmic training and research, with many ophthalmologists routinely recording their surgical cases. Although this modality represents an excellent means of improving technique and advancing knowledge, there are major ethical and legal considerations with its use. Informed consent to record is required in most situations. Patients should be advised of any risk of identification and the purpose of the recording. Systems should be in place to deal with issues such as data storage, withdrawal of consent, and patients requesting copies of their recording. The privacy and security of patients and health care professionals alike must not be compromised. Ownership and distribution of video recordings, the potential for their use in medical litigation, the ethics and legality of editing, and the impact on surgeon performance are other factors to consider. Although video recording of ophthalmic surgery is useful and technically simple to accomplish, patient safety and welfare must always remain paramount. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  19. Guided filtering for solar image/video processing

    Science.gov (United States)

    Xu, Long; Yan, Yihua; Cheng, Jun

    2017-06-01

    A new image enhancement algorithm employing guided filtering is proposed in this work for enhancement of solar images and videos, so that users can easily pick out important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise, while further highlighting fibrous structures on and beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominence/coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm enhances the visual quality of solar images significantly beyond the original input and several classical image enhancement algorithms, thus facilitating easier determination of interesting solar burst activities from recorded images/movies.
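
    The abstract names guided filtering but gives no implementation details. A minimal self-guided sketch of the standard guided filter (box-filtered local linear model) is shown below; the window radius r and regularizer eps are our illustrative choices, not the paper's.

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, truncated at the borders."""
    H, W = img.shape
    s = np.zeros((H + 1, W + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    y0 = np.clip(np.arange(H) - r, 0, H)
    y1 = np.clip(np.arange(H) + r + 1, 0, H)
    x0 = np.clip(np.arange(W) - r, 0, W)
    x1 = np.clip(np.arange(W) + r + 1, 0, W)
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    return (s[y1][:, x1] - s[y0][:, x1] - s[y1][:, x0] + s[y0][:, x0]) / area

def guided_filter(I, p, r=2, eps=1e-2):
    """Edge-preserving smoothing of p, guided by I (local linear model)."""
    mI, mp = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mI * mp
    var_I = box(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)          # per-window linear coefficients
    b = mp - a * mI
    return box(a, r) * I + box(b, r)    # averaged coefficients applied to I

# Toy demo: a noisy step edge, filtered with the image as its own guide.
rng = np.random.default_rng(3)
step = np.zeros((32, 32))
step[:, 16:] = 1.0
noisy = step + 0.1 * rng.standard_normal(step.shape)
q = guided_filter(noisy, noisy)
```

    The filter suppresses noise in flat regions (where the local variance is small relative to eps) while leaving the step edge largely intact, which is the behavior that preserves the fibrous solar structures mentioned above.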

  20. Educational video recording and editing for the hand surgeon.

    Science.gov (United States)

    Rehim, Shady A; Chung, Kevin C

    2015-05-01

    Digital video recordings are increasingly used across various medical and surgical disciplines including hand surgery for documentation of patient care, resident education, scientific presentations, and publications. In recent years, the introduction of sophisticated computer hardware and software technology has simplified the process of digital video production and improved means of disseminating large digital data files. However, the creation of high-quality surgical video footage requires a basic understanding of key technical considerations, together with creativity and sound aesthetic judgment of the videographer. In this article we outline the practical steps involved in equipment preparation, video recording, editing, and archiving, as well as guidance for the choice of suitable hardware and software equipment. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  1. [Video Instruction for Synchronous Video Recording of Mimic Movement of Patients with Facial Palsy].

    Science.gov (United States)

    Schaede, Rebecca Anna; Volk, Gerd Fabian; Modersohn, Luise; Barth, Jodi Maron; Denzler, Joachim; Guntinas-Lichius, Orlando

    2017-12-01

    Background Photografy and video are necessary to record the severity of a facial palsy or to allow offline grading with a grading system. There is no international standard for the video recording urgently needed to allow a standardized comparison of different patient cohorts. Methods A video instruction was developed. The instruction was shown to the patient and presents several mimic movements. At the same time the patient is recorded while repeating the presented movement using commercial hardware. Facial movements were selected in such a way that it was afterwards possible to evaluate the recordings with standard grading systems (House-Brackmann, Sunnybrook, Stennert, Yanagihara) or even with (semi)automatic software. For quality control, the patients evaluated the instruction using a questionnaire. Results The video instruction takes 11 min and 05 and is divided in 3 parts: 1) Explanation of the procedure; 2) Foreplay and recreating of the facial movements; 3) Repeating of sentences to analyze the communication skills. So far 13 healthy subjects and 10 patients with acute or chronic facial palsy were recorded. All recordings could be assessed by the above mentioned grading systems. The instruction was rated as well explaining and easy to follow by healthy persons and patients. Discussion There is now a video instruction available for standardized recording of facial movement. This instruction is recommended for use in clinical routine and in clinical trials. This will allow a standardized comparison of patients within Germany and international patient cohorts. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Content-based image and video compression

    Science.gov (United States)

    Du, Xun; Li, Honglin; Ahalt, Stanley C.

    2002-08-01

    The term Content-Based appears often in applications for which MPEG-7 is expected to play a significant role. MPEG-7 standardizes descriptors of multimedia content, and while compression is not the primary focus of MPEG-7, the descriptors defined by MPEG-7 can be used to reconstruct a rough representation of an original multimedia source. In contrast, current image and video compression standards such as JPEG and MPEG are not designed to encode at the very low bit-rates that could be accomplished with MPEG-7 using descriptors. In this paper we show that content-based mechanisms can be introduced into compression algorithms to improve the scalability and functionality of current compression methods such as JPEG and MPEG. This is the fundamental idea behind Content-Based Compression (CBC). Our definition of CBC is a compression method that effectively encodes a sufficient description of the content of an image or a video in order to ensure that the recipient is able to reconstruct the image or video to some degree of accuracy. The degree of accuracy can be, for example, the classification error rate of the encoded objects, since in MPEG-7 the classification error rate measures the performance of the content descriptors. We argue that the major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier, or with a quantizer which minimizes classification error. Compared to conventional image and video compression methods such as JPEG and MPEG, our results show that content-based compression is able to achieve more efficient image and video coding by suppressing the background while leaving the objects of interest nearly intact.
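
    The background-suppression idea in the last sentence can be illustrated with a deliberately simple sketch (our construction, not the paper's codec): spend bits on a region of interest by quantizing it finely, and suppress the background with a much coarser quantizer.

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (16, 16)).astype(float)

mask = np.zeros_like(img, dtype=bool)
mask[4:12, 4:12] = True                 # "object of interest" region

def quantize(x, step):
    """Uniform scalar quantization with the given step size."""
    return np.round(x / step) * step

# Fine quantization inside the object, coarse outside.
coded = np.where(mask, quantize(img, 4), quantize(img, 64))

err_obj = np.abs(coded - img)[mask].mean()
err_bg = np.abs(coded - img)[~mask].mean()
assert err_obj < err_bg
```

    A real content-based coder would drive the region/step decision from a classifier as the paper describes, rather than from a hand-drawn mask, but the rate trade-off is the same: the object region stays nearly intact while the background is heavily degraded.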

  3. Recorded peer video chat as a research and development tool

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Cowie, Bronwen

    2016-01-01

    When practising teachers take time to exchange their experiences and reflect on their teaching realities as critical friends, they add meaning and depth to educational research. When peer talk is facilitated through video chat platforms, teachers can meet (virtually) face to face even when … recordings were transcribed and used to prompt further discussion. The recording of the video chat meetings provided an opportunity for researchers to listen in and follow up on points they felt needed further unpacking or clarification. The recorded peer video chat conversations provided an additional … opportunity to stimulate and support teacher participants in a process of critical analysis and reflection on practice. The discussions themselves were empowering because, in the absence of the researcher, the teachers, in negotiation with peers, chose what was important enough to them to take time to discuss…

  4. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department, in an attempt to record the real point of view of the surgeon's magnified vision and so make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, whereas editing is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype of video camera, compared with the GoPro® 4 Session, guarantees better results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might serve as a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  5. Structural image and video understanding

    NARCIS (Netherlands)

    Lou, Z.

    2016-01-01

    In this thesis, we have discussed how to exploit the structures in several computer vision topics. The five chapters addressed five computer vision topics using the image structures. In chapter 2, we proposed a structural model to jointly predict the age, expression and gender of a face. By modeling

  6. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Thomas Burger

    2008-04-01

    We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language, which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people, that involve SL and CS video synthesis.

  7. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Aran Oya

    2007-01-01

    We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language, which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people, that involve SL and CS video synthesis.

  8. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Precipitation Video Imager (PVI) GCPEx dataset collected precipitation particle images and drop size distribution data from November 2011...

  9. Video News release: LHC Energy Record

    CERN Multimedia

    CERN video productions

    2009-01-01

    Geneva, 30 November 2009. CERN’s Large Hadron Collider has today become the world’s highest energy particle accelerator, having accelerated its twin beams of protons to an energy of 1.18 TeV in the early hours of the morning. This exceeds the previous world record of 0.98 TeV, which had been held by the US Fermi National Accelerator Laboratory’s Tevatron collider since 2001. It marks another important milestone on the road to first physics at the LHC in 2010. “We are still coming to terms with just how smoothly the LHC commissioning is going,” said CERN Director General Rolf Heuer. “It is fantastic. However, we are continuing to take it step by step, and there is still a lot to do before we start physics in 2010. I’m keeping my champagne on ice until then.” These developments come just 10 days after the LHC restart, demonstrating the excellent performance of the machine. First beams were injected into the LHC on Friday 20 November. Over the following days, the machine’s operators circulated...

  10. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. The subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For the objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  11. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  12. Video Recording in Ethnographic SLA Research: Some Issues of Validity in Data Collection.

    Science.gov (United States)

    DuFon, Margaret A.

    2002-01-01

    Reviews visual anthropology, educational anthropology, and ethnographic filmmaking literature on questions concerning collection of valid video-recorded data in the second language context. Examines how an interaction should be video recorded, who should be video recorded, and who should do the recording. Examples illustrate the kinds of research…

  13. Objective Video Quality Assessment of Direct Recording and Datavideo HDR-40 Recording System

    Directory of Open Access Journals (Sweden)

    Nofia Andreana

    2016-10-01

    A Digital Video Recorder (DVR) is a digital video recorder with hard-drive storage media. When the capacity of the hard disk runs out, it informs the user, and if there is no response, the oldest recordings are overwritten automatically and the data are lost. The main focus of this paper is to enable recording directly connected to a computer editor. The output of both systems (DVR and Direct Recording) was compared with an objective assessment using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) parameters. The results showed an average MSE of 797.8556108 for Direct Recording and 137.4346100 for the DVR, and an average PSNR of 19.5942333 dB for Direct Recording and 27.0914258 dB for the DVR. This indicates that the DVR has a much better output quality than Direct Recording.
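
    The MSE and PSNR metrics used above are standard and easy to reproduce; for 8-bit frames, PSNR = 10·log10(255²/MSE). The frames below are hypothetical stand-ins, but note that the record's reported DVR MSE of ≈137.4 maps to 10·log10(255²/137.4) ≈ 26.8 dB, consistent with the reported ≈27.1 dB average (per-frame averaging would explain the small gap).

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two frames."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for `peak`-valued (8-bit) frames."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

# Hypothetical stand-ins for a reference frame and a recorded copy.
rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (120, 160)).astype(float)
rec = np.clip(ref + rng.normal(0, 12, ref.shape), 0, 255)

print(round(mse(ref, rec), 1), round(psnr(ref, rec), 2))
```

    Lower MSE and higher PSNR both indicate output closer to the reference, which is why the DVR's 137.4 / 27.1 dB beats Direct Recording's 797.9 / 19.6 dB.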

  14. Hardware implementation of machine vision systems: image and video processing

    Science.gov (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

    2013-12-01

    This contribution focuses on different topics covered by the special issue titled `Hardware Implementation of Machine vision Systems' including FPGAs, GPUS, embedded systems, multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics based vision, 3D processing/coding, scene understanding, and multimedia.

  15. Dynamic Image Stitching for Panoramic Video

    Directory of Open Access Journals (Sweden)

    Jen-Yu Shieh

    2014-10-01

    Full Text Available The design of this paper is based on dynamic image stitching for panoramic video. Using the OpenCV function library and the SIFT algorithm as a basis, the article introduces a Gaussian second-difference MoG, derived from the DoG (Difference of Gaussians) map, to reduce the order of dynamic image synthesis and to simplify the Gaussian pyramid algorithm. MSIFT is combined with an overlapping segmentation method to narrow the scope of feature extraction and thereby increase speed. With this method, traditional image synthesis can be improved without lengthy calculation or the limitations of space and angle. The research uses four ordinary webcams and two IP cameras fitted with wide-angle lenses; the wide-angle lenses monitor a large area, and image stitching then produces the panoramic effect. For the overall image application and control interface, Microsoft Visual Studio C# is used to construct the software interface. On a personal computer with a 2.4-GHz CPU and 2 GB of RAM, with the cameras attached, the execution speed is three images per second, which reduces the calculation time of the traditional algorithm.

  16. Does Instructor's Image Size in Video Lectures Affect Learning Outcomes?

    Science.gov (United States)

    Pi, Z.; Hong, J.; Yang, J.

    2017-01-01

    One of the most commonly used forms of video lectures is a combination of an instructor's image and accompanying lecture slides as a picture-in-picture. As the image size of the instructor varies significantly across video lectures, and so do the learning outcomes associated with this technology, the influence of the instructor's image size should…

  17. Hardware adaptation layer for MPEG video recording on a helical scan-based digital data recorder

    Science.gov (United States)

    de Ridder, Ad C.; Kindt, S.; Frimout, Emmanuel D.; Biemond, Jan; Lagendijk, Reginald L.

    1996-03-01

    The forthcoming introduction of helical scan digital data tape recorders with high access bandwidth and large capacity will facilitate the recording and retrieval of a wide variety of multimedia information from different sources, such as computer data and digital audio and video. For the compression of digital audio and video, the MPEG standard has been internationally accepted. Although helical scan tape recorders can store and play back MPEG-compressed signals transparently, they are not well suited to special playback modes, in particular fast forward and fast reverse: only random portions of the original MPEG bitstream are recovered on fast playback. Unfortunately, these shreds of information cannot be interpreted by a standard MPEG decoder, due to loss of synchronization and missing reference pictures. In the EC-sponsored RACE project DART (Digital Data Recorder Terminal), the possibilities for recording and fast playback of MPEG video on a helical scan recorder have been investigated. In the approach presented in this paper, we assume that no transcoding is carried out on the incoming bitstream at recording time and that no additional information is recorded. To use the shreds of information for the reconstruction of interpretable pictures, a bitstream validator has been developed to achieve conformance to the MPEG-2 syntax during fast playback. The concept has been validated by realizing hardware demonstrators that connect to a prototype helical scan digital data tape recorder.

  18. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. Not all problems can be solved automatically; for some applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Indeed, the idea of interactive computer systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  19. Video Vortex reader II: moving images beyond YouTube

    NARCIS (Netherlands)

    Lovink, G.; Somers Miles, R.

    2011-01-01

    Video Vortex Reader II is the Institute of Network Cultures' second collection of texts that critically explore the rapidly changing landscape of online video and its use. With the success of YouTube ('2 billion views per day') and the rise of other online video sharing platforms, the moving image

  20. Image and video compression fundamentals, techniques, and applications

    CERN Document Server

    Joshi, Madhuri A; Dandawate, Yogesh H; Joshi, Kalyani R; Metkar, Shilpa P

    2014-01-01

    Image and video signals require large transmission bandwidth and storage, leading to high costs. The data must be compressed without a loss or with a small loss of quality. Thus, efficient image and video compression algorithms play a significant role in the storage and transmission of data.Image and Video Compression: Fundamentals, Techniques, and Applications explains the major techniques for image and video compression and demonstrates their practical implementation using MATLAB® programs. Designed for students, researchers, and practicing engineers, the book presents both basic principles

  1. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades quality. Noise reduction is therefore essential for improving visual observation quality, or as a pre-processing step for further automated analysis such as image/video segmentation, texture analysis, and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound image and video, as well as the theoretical background, algorithmic steps, and Matlab™ code for the following group of despeckle filters:
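As a concrete, if simplistic, example of despeckling, a median filter is one of the classic local filters applied to speckled frames: it replaces each pixel with the median of its neighbourhood, suppressing impulsive outliers while preserving edges better than a mean filter. The sketch below is a generic illustration, not one of the book's specific filters:

```python
def median_filter(image, patch=3):
    """Apply a patch x patch median filter to a 2-D list of pixel values,
    clamping the window at the image borders."""
    h, w = len(image), len(image[0])
    r = patch // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [image[y][x]
                      for y in range(max(0, i - r), min(h, i + r + 1))
                      for x in range(max(0, j - r), min(w, j + r + 1))]
            window.sort()
            out[i][j] = window[len(window) // 2]  # median of the window
    return out

# a uniform region with one bright speckle outlier; the median removes it
frame = [[100] * 5 for _ in range(5)]
frame[2][2] = 255
print(median_filter(frame)[2][2])  # → 100
```

Real despeckle filters for ultrasound (e.g. adaptive local-statistics or diffusion filters) are more elaborate, but follow the same local-window pattern.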

  2. Audiovisual presentation of video-recorded stimuli at a high frame rate

    National Research Council Canada - National Science Library

    Lidestam, Björn

    2014-01-01

    .... Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting...

  3. Semi-Supervised Image-to-Video Adaptation for Video Action Recognition.

    Science.gov (United States)

    Zhang, Jianguang; Han, Yahong; Tang, Jinhui; Hu, Qinghua; Jiang, Jianmin

    2017-04-01

    Human action recognition has been well explored in applications of computer vision. Many successful action recognition methods have shown that action knowledge can be effectively learned from motion videos or still images. For the same action, the appropriate action knowledge learned from different types of media, e.g., videos or images, may be related. However, less effort has been made to improve the performance of action recognition in videos by adapting the action knowledge conveyed from images to videos. Most of the existing video action recognition methods suffer from the problem of lacking sufficient labeled training videos. In such cases, over-fitting would be a potential problem and the performance of action recognition is restrained. In this paper, we propose an adaptation method to enhance action recognition in videos by adapting knowledge from images. The adapted knowledge is utilized to learn the correlated action semantics by exploring the common components of both labeled videos and images. Meanwhile, we extend the adaptation method to a semi-supervised framework which can leverage both labeled and unlabeled videos. Thus, the over-fitting can be alleviated and the performance of action recognition is improved. Experiments on public benchmark datasets and real-world datasets show that our method outperforms several other state-of-the-art action recognition methods.

  4. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  5. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Precipitation Video Imager (PVI) collected precipitation particle images and drop size distribution data during November 2011 through March 2012 as part of the...

  6. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing the data. Still images and videos are the most commonly used formats. Images are compact but contain no motion information; videos record motion but are sometimes too large to analyze. Sequential images, sets of continuous images with a low frame rate, stand out because they are smaller than videos yet still retain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea-level rise and global environmental change. Detecting lakes above ice suffers from varying image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, including in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and obtained new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of video captured by home webcams, detecting misbehaving users is a highly challenging task. We propose SafeVchat, the first solution that achieves satisfactory

  7. Image and video search engine for the World Wide Web

    Science.gov (United States)

    Smith, John R.; Chang, Shih-Fu

    1997-01-01

    We describe a visual information system prototype for searching for images and videos on the World-Wide Web. New visual information in the form of images, graphics, animations and videos is being published on the Web at an incredible rate. However, cataloging this visual data is beyond the capabilities of current text-based Web search engines. In this paper, we describe a complete system by which visual information on the Web is (1) collected by automated agents, (2) processed in both text and visual feature domains, (3) catalogued and (4) indexed for fast search and retrieval. We introduce an image and video search engine which utilizes both text-based navigation and content-based technology for searching visually through the catalogued images and videos. Finally, we provide an initial evaluation based upon the cataloging of over one half million images and videos collected from the Web.
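Content-based search of the kind described typically reduces each image to a compact visual feature and ranks catalogue entries by similarity to the query feature. A toy sketch using a coarse greyscale histogram and histogram intersection (illustrative only; the actual engine uses richer colour and texture features):

```python
def histogram(pixels, bins=4, max_val=256):
    """Coarse intensity histogram, normalised so the bins sum to 1."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    total = len(pixels)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# a tiny "catalogue" of indexed images, each reduced to its feature vector
catalogue = {
    "dark_frame":   histogram([10, 20, 15, 30, 25, 12]),
    "bright_frame": histogram([240, 250, 230, 245, 235, 255]),
}
query = histogram([18, 22, 11, 28, 31, 14])
best = max(catalogue, key=lambda name: intersection(catalogue[name], query))
print(best)  # → dark_frame
```

Indexing the feature vectors offline is what makes retrieval fast at query time, the same division of labour as in the system's crawl/process/catalogue/index pipeline.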

  8. Eye-Movement Tracking Using Compressed Video Images

    Science.gov (United States)

    Mulligan, Jeffrey B.; Beutter, Brent R.; Hull, Cynthia H. (Technical Monitor)

    1994-01-01

    Infrared video cameras offer a simple noninvasive way to measure the position of the eyes using relatively inexpensive equipment. Several commercial systems are available which use special hardware to localize features in the image in real time, but the constraint of realtime performance limits the complexity of the applicable algorithms. In order to get better resolution and accuracy, we have used off-line processing to apply more sophisticated algorithms to the images. In this case, a major technical challenge is the real-time acquisition and storage of the video images. This has been solved using a strictly digital approach, exploiting the burgeoning field of hardware video compression. In this paper we describe the algorithms we have developed for tracking the movements of the eyes in video images, and present experimental results showing how the accuracy is affected by the degree of video compression.
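A common off-line approach to localizing the eye in infrared video is to threshold the dark pupil and take the centroid of the resulting blob of pixels. The sketch below illustrates the idea on a synthetic frame; it is a generic simplification, not the authors' algorithm:

```python
def pupil_centroid(image, threshold=50):
    """Centroid (row, col) of pixels darker than `threshold` (the pupil blob),
    or None if no pixel falls below the threshold."""
    rows = cols = count = 0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value < threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return rows / count, cols / count

# 5x5 synthetic infrared frame: bright background (200) with a dark 2x2 pupil
frame = [[200] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 10
print(pupil_centroid(frame))  # → (1.5, 2.5)
```

Because the centroid averages over many pixels, sub-pixel accuracy is possible, which is also why blocky compression artifacts that perturb pixel values can degrade the estimate.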

  9. Uncompressed video image transmission of laparoscopic or endoscopic surgery for telemedicine.

    Science.gov (United States)

    Huang, Ke-Jian; Qiu, Zheng-Jun; Fu, Chun-Yu; Shimizu, Shuji; Okamura, Koji

    2008-06-01

    Traditional narrowband telemedicine cannot provide quality dynamic images. We conducted videoconferences of laparoscopic and endoscopic operations via an uncompressed video transmission technique. A superfast broadband Internet link was set up between Shanghai in the People's Republic of China and Fukuoka in Japan. Uncompressed dynamic video images of laparoscopic and endoscopic operations were transmitted by a digital video transfer system (DVTS). Seven teleconferences were conducted between June 2005 and June 2007. Of the 7 teleconferences, 5 were live surgical demonstrations and 3 were recorded video teleconsultations. Smoothness of the motion picture, sharpness of images, and clarity of sound were benefited by this form of telemedicine based upon DVTS. Telemedicine based upon DVTS is a superior choice for laparoscopic and endoscopic skill training across the borders.
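To see why a superfast broadband link is needed, the raw bit rate of uncompressed video can be estimated from resolution, frame rate, and bits per pixel. A back-of-the-envelope sketch (the figures below are illustrative, not taken from the paper):

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=16):
    """Uncompressed video bit rate in Mbit/s.
    bits_per_pixel=16 corresponds to 4:2:2 chroma-subsampled YUV."""
    return width * height * fps * bits_per_pixel / 1e6

# standard-definition 720x480 at 30 frames/s, 4:2:2 sampling
print(round(raw_bitrate_mbps(720, 480, 30), 1))  # → 165.9
```

Rates in this range are far beyond traditional narrowband telemedicine links, which is why such links must compress aggressively and lose the image quality that uncompressed transmission preserves.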

  10. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    Science.gov (United States)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts; a suite of applications programs and an executive which serves as the interfaces between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. 
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  11. Spatio-temporal image inpainting for video applications

    Directory of Open Access Journals (Sweden)

    Voronin Viacheslav

    2017-01-01

    Full Text Available Video inpainting, or completion, is a vital video-improvement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method removes dynamic objects or restores missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used to detect scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace the parts of the frame occupied by objects marked for removal using a background model. We extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects, and a moving background. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method can restore missing blocks and remove text from scenes in videos.

  12. Video-EEG recording: a four-year clinical audit.

    LENUS (Irish Health Repository)

    O'Rourke, K

    2012-02-03

    In the setting of a regional neurological unit without an epilepsy surgery service, as in our case, video-EEG telemetry is undertaken for three main reasons: to investigate whether frequent paroxysmal events represent seizures when there is clinical doubt, to attempt anatomical localization of partial seizures when standard EEG is unhelpful, and to attempt to confirm that seizures are non-epileptic when this is suspected. A clinical audit of all telemetry performed over a four-year period was carried out in order to determine the clinical utility of this aspect of the service and to identify means of improving its effectiveness in the unit. Analysis of the data showed a high rate of negative studies with no attacks recorded. Of the positive studies, approximately 50% showed non-epileptic attacks. Strategies for improving the rate of positive investigations are discussed.

  13. EEG in the classroom: Synchronised neural recordings during video presentation

    Science.gov (United States)

    Poulsen, Andreas Trier; Kamronn, Simon; Dmochowski, Jacek; Parra, Lucas C.; Hansen, Lars Kai

    2017-03-01

    We performed simultaneous recordings of electroencephalography (EEG) from multiple students in a classroom, and measured the inter-subject correlation (ISC) of activity evoked by a common video stimulus. The neural reliability, as quantified by ISC, has been linked to engagement and attentional modulation in earlier studies that used high-grade equipment in laboratory settings. Here we reproduce many of the results from these studies using portable low-cost equipment, focusing on the robustness of using ISC for subjects experiencing naturalistic stimuli. The present data shows that stimulus-evoked neural responses, known to be modulated by attention, can be tracked for groups of students with synchronized EEG acquisition. This is a step towards real-time inference of engagement in the classroom.
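ISC is essentially a correlation between the stimulus-evoked responses of different subjects; in the simplest two-subject case it reduces to a Pearson correlation over time. A toy sketch with synthetic "evoked" signals (the study itself uses a multi-subject correlated-components formulation, not this pairwise simplification):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# two "subjects" whose responses follow the same video stimulus plus noise
stimulus = [math.sin(0.3 * t) for t in range(100)]
subj_a = [s + 0.1 * ((-1) ** t) for t, s in enumerate(stimulus)]
subj_b = [s - 0.1 * ((-1) ** t) for t, s in enumerate(stimulus)]
isc = pearson(subj_a, subj_b)
print(round(isc, 2))
```

When attention wanes, the stimulus-locked component shrinks relative to each subject's idiosyncratic noise, and the inter-subject correlation drops accordingly.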

  14. Measuring coupled oscillations using an automated video analysis technique based on image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Monsoriu, Juan A; Gimenez, Marcos H; Riera, Jaime; Vidaurre, Ana [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, E-46022 Valencia (Spain)

    2005-11-01

    The applications of the digital video image to the investigation of physical phenomena have increased enormously in recent years. The advances in computer technology and image recognition techniques allow the analysis of more complex problems. In this work, we study the movement of a damped coupled oscillation system. The motion is considered as a linear combination of two normal modes, i.e. the symmetric and antisymmetric modes. The image of the experiment is recorded with a video camera and analysed by means of software developed in our laboratory. The results show a very good agreement with the theory.
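The normal-mode decomposition used in the study, writing each mass's position as a sum and difference of the symmetric and antisymmetric modes, can be sketched directly. The version below is undamped for simplicity, and the amplitudes and frequencies are arbitrary placeholders rather than the experiment's values:

```python
import math

def coupled_positions(t, a_s=1.0, a_a=0.5, w_s=2.0, w_a=3.0):
    """Positions of two coupled oscillators as a superposition of the
    symmetric mode (both masses in phase, frequency w_s) and the
    antisymmetric mode (masses out of phase, frequency w_a)."""
    sym = a_s * math.cos(w_s * t)
    anti = a_a * math.cos(w_a * t)
    return sym + anti, sym - anti

# at t = 0 both modes are at maximum, so the amplitudes add and subtract
x1, x2 = coupled_positions(0.0)
print(x1, x2)  # → 1.5 0.5
```

Fitting the tracked positions from the video frames to this two-mode model is what allows the experimental frequencies to be compared with theory.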

  15. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    Science.gov (United States)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.

  16. Research on defogging technology of video image based on FPGA

    Science.gov (United States)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Because of scattering by atmospheric particles, video captured by outdoor surveillance systems has low contrast and brightness, which directly affects the practical value of the system. Traditional defogging has mostly been implemented in software, applying defogging algorithms to single frames; these algorithms are computationally heavy and have high time complexity. Defogging based on a Digital Signal Processor (DSP) suffers from complex peripheral circuitry, cannot run in real time, and is hard to debug and upgrade. In this paper, building on an improved dark channel prior algorithm, we propose a video-defogging technique based on a Field Programmable Gate Array (FPGA). Compared to traditional defogging methods, high-resolution video can be processed in real time. The function modules of the system were designed in a hardware description language. The results show that the FPGA-based defogging system can process video with a minimum resolution of 640×480 in real time. After defogging, the brightness and contrast of the video are effectively improved. The proposed defogging technology therefore has a wide variety of applications, including aviation, forest-fire prevention, national security, and other important surveillance tasks.
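The dark channel prior on which the improved algorithm builds assigns each pixel the minimum intensity over the colour channels and a local patch; haze-free outdoor regions have dark channels near zero, so large values signal haze to be removed. A minimal pure-Python sketch of the dark channel itself (the FPGA design is, of course, a hardware pipeline rather than software like this):

```python
def dark_channel(image, patch=3):
    """Dark channel of an RGB image given as a 2-D list of (R, G, B) triples:
    per pixel, the minimum over the channels and a patch x patch window."""
    h, w = len(image), len(image[0])
    per_pixel_min = [[min(px) for px in row] for row in image]
    r = patch // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(per_pixel_min[y][x]
                            for y in range(max(0, i - r), min(h, i + r + 1))
                            for x in range(max(0, j - r), min(w, j + r + 1)))
    return out

# 2x2 toy image: one hazy pixel (high in every channel) among darker ones
img = [[(200, 210, 220), (10, 80, 90)],
       [(5, 60, 70), (8, 50, 40)]]
print(dark_channel(img, patch=3))  # → [[5, 5], [5, 5]]
```

The full defogging pipeline then estimates atmospheric light and a transmission map from this channel before recovering the scene radiance; the per-pixel min/window-min structure is what makes the prior amenable to a streaming FPGA implementation.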

  17. Mobile, portable lightweight wireless video recording solutions for homeland security, defense, and law enforcement applications

    Science.gov (United States)

    Sandy, Matt; Goldburt, Tim; Carapezza, Edward M.

    2015-05-01

    It is desirable for executive officers of law enforcement agencies and other executives in homeland security and defense, as well as first responders, to have basic information about the latest trends in mobile, portable, lightweight wireless video recording solutions available on the market. This paper reviews and discusses a number of studies on the use and effectiveness of wireless video recording solutions. It provides insights into the features of wearable video recording devices that offer excellent applications for the categories of security agencies listed in this paper. It also answers key questions such as: how to determine the type of video recording solution most suitable for the needs of your agency, the essential features to look for when selecting a device for your video needs, and the privacy issues involved with wearable video recording devices.

  18. A professional and cost effective digital video editing and image storage system for the operating room.

    Science.gov (United States)

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system, ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits videography to be obtained quickly and easily. By mixing the video streams from all the devices in use in the operating room and applying filters and effects, a professional final product is produced. Recording to DVD provides an inexpensive, portable, and easy-to-use medium for storage, re-editing, or later taping. From stored videography it is easy to extract high-quality still images useful for teaching, presentations, and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  19. [Video recording system of endoscopic procedures for digital forensics].

    Science.gov (United States)

    Endo, Chiaki; Sakurada, A; Kondo, T

    2009-07-01

    Recently, endoscopic procedures including surgery, intervention, and examination have been widely performed. Medical practitioners are required to record the procedures precisely, both to allow retrospective review and to obtain a legally reliable record. The Medical Forensic System made by KS Olympus Japan simultaneously records two video streams together with the patient's data, such as heart rate, blood pressure, and SpO2. We installed this system in the bronchoscopy room and have experienced its benefits. With this system, we obtain the bronchoscopic image, a view of the bronchoscopy room, and the patient's data simultaneously. The quality of the bronchoscopic procedures can be checked retrospectively, which is useful for training bronchoscopy staff. The Medical Forensic System should be installed for any kind of endoscopic procedure.

  20. Does Wearable Medical Technology With Video Recording Capability Add Value to On-Call Surgical Evaluations?

    Science.gov (United States)

    Gupta, Sameer; Boehme, Jacqueline; Manser, Kelly; Dewar, Jannine; Miller, Amie; Siddiqui, Gina; Schwaitzberg, Steven D

    2016-10-01

    Background: Google Glass has been used in a variety of medical settings with promising results. We explored the use and potential value of an asynchronous, near-real-time protocol, which avoids the transmission issues associated with real-time applications, for recording, uploading, and viewing high-definition (HD) visual media in the emergency department (ED) to facilitate remote surgical consults. Study Design: First-responder physician assistants captured pertinent aspects of the physical examination and diagnostic imaging using Google Glass' HD video or high-resolution photographs. This visual media was then securely uploaded to the study website. The surgical consultation then proceeded over the phone in the usual fashion and a clinical decision was made. The surgeon then accessed the study website to review the uploaded video, followed by a questionnaire regarding how the additional data impacted the consultation. Results: The management plan changed in 24% (11) of cases after surgeons viewed the video. Five of these plans involved decision making regarding operative intervention. Although surgeons were generally confident in their initial management plan, confidence scores increased further in 44% (20) of cases. In addition, we surveyed 276 ED patients on their opinions concerning the practice of health care providers wearing and using recording devices in the ED. The survey results revealed that the majority of patients are amenable to the addition of wearable technology with video functionality to their care. Conclusions: This study demonstrates the potential value of a medically dedicated, hands-free, HD recording device with internet connectivity in facilitating remote surgical consultation. © The Author(s) 2016.

  1. VIPER: a general-purpose digital image-processing system applied to video microscopy.

    Science.gov (United States)

    Brunner, M; Ittner, W

    1988-01-01

    This paper describes VIPER, the video image-processing system Erlangen. It consists of a general-purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output modules such as a TV camera, video recorders, and monitors, and a software package. The modular structure and capabilities of this system are explained. The software is user-friendly and menu-driven, and performs image acquisition, transfers, greyscale processing, arithmetic and logical operations, filtering, display, colour assignment, graphics, and a number of management functions. More than 100 image-processing functions are implemented, available either by typing a key or by a simple call to the function-subroutine library in application programs. Examples are supplied from the area of biomedical research, e.g. in-vivo microscopy.

  2. Video event data recording of a taxi driver used for diagnosis of epilepsy.

    Science.gov (United States)

    Sakurai, Kotaro; Yamamoto, Junko; Kurita, Tsugiko; Takeda, Youji; Kusumi, Ichiro

    2014-01-01

    A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he raced an engine for a long time while parked. The VEDR enabled confirmation that the accidents depended on an epileptic seizure and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only for traffic accident evidence; it might also contribute to a driver's health care and road safety.

  3. Video event data recording of a taxi driver used for diagnosis of epilepsy☆

    Science.gov (United States)

    Sakurai, Kotaro; Yamamoto, Junko; Kurita, Tsugiko; Takeda, Youji; Kusumi, Ichiro

    2014-01-01

    A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he raced an engine for a long time while parked. The VEDR enabled confirmation that the accidents depended on an epileptic seizure and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only for traffic accident evidence; it might also contribute to a driver's health care and road safety. PMID:25667862

  4. Localizing wushu players on a platform based on a video recording

    Science.gov (United States)

    Peczek, Piotr M.; Zabołotny, Wojciech M.

    2017-08-01

    This article describes the development of a method to localize an athlete on a platform during a sports performance, based on a static video recording. The sport considered for this method is wushu, a martial art; however, any other discipline could be applied. The requirements are specified, and two image-processing algorithms are described. The next part presents an experiment based on recordings from the Pan American Wushu Championship, on which the steps of the algorithm are shown. Results are evaluated manually. The last part of the article concludes whether the algorithm is applicable and what improvements have to be implemented to use it during sports competitions as well as for offline analysis.

  5. Video event data recording of a taxi driver used for diagnosis of epilepsy

    Directory of Open Access Journals (Sweden)

    Kotaro Sakurai

    2014-01-01

    Full Text Available A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he raced an engine for a long time while parked. The VEDR enabled confirmation that the accidents depended on an epileptic seizure and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only for traffic accident evidence; it might also contribute to a driver's health care and road safety.

  6. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help to assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image-processing functionality as well as of reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation of scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Method and apparatus for reading meters from a video image

    Science.gov (United States)

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
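    A minimal sketch of the final calibration step, mapping a needle position found in the digitized image to a meter reading (the dial geometry and calibration angles below are hypothetical, not taken from the patent):

```python
import math

def needle_angle(cx, cy, tip_x, tip_y):
    """Angle of the needle in degrees, measured from the dial centre (cx, cy)."""
    return math.degrees(math.atan2(tip_y - cy, tip_x - cx))

def angle_to_reading(angle, angle_min, angle_max, value_min, value_max):
    """Linearly interpolate a dial angle into a meter value using two
    calibration points (angle_min -> value_min, angle_max -> value_max)."""
    frac = (angle - angle_min) / (angle_max - angle_min)
    return value_min + frac * (value_max - value_min)

# Hypothetical calibration: the dial reads 0 units at -135 deg and 100 at +135 deg.
angle = needle_angle(100, 100, 171, 171)     # needle tip down-right => 45 deg
print(angle_to_reading(angle, -135.0, 135.0, 0.0, 100.0))
```

    A real system would first locate the needle tip in the calibrated image region (e.g. by thresholding); the linear mapping above is only the last step.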

  8. Acquiring a dataset of labeled video images showing discomfort in demented elderly.

    Science.gov (United States)

    Bonroy, Bert; Schiepers, Pieter; Leysens, Greet; Miljkovic, Dragana; Wils, Maartje; De Maesschalck, Lieven; Quanten, Stijn; Triau, Eric; Exadaktylos, Vasileios; Berckmans, Daniel; Vanrumste, Bart

    2009-05-01

    One of the effects of late-stage dementia is the loss of the ability to communicate verbally. Patients become unable to call for help if they feel uncomfortable. The first objective of this article was to record facial expressions of bedridden demented elderly. For this purpose, we developed a video acquisition system (ViAS) that records synchronized video coming from two cameras. Each camera delivers uncompressed color images of 1,024 x 768 pixels, up to 30 frames per second. It is the first time that such a system has been placed in a patient's room. The second objective was to simultaneously label these video recordings with respect to discomfort expressions of the patients. Therefore, we developed a Digital Discomfort Labeling Tool (DDLT). This tool provides an easy-to-use software representation on a tablet PC of validated "paper" discomfort scales. With ViAS and DDLT, 80 different datasets were obtained of about 15 minutes of recordings. Approximately 80% of the recorded datasets delivered the labeled video recordings. The remainder were not usable due to under- or overexposed images and due to the patients being out of view as the system was not properly replaced after care. In one of 6 observed patients, nurses recognized a higher discomfort level that would not have been observed without the DDLT.
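    A back-of-envelope check of the bandwidth implied by the quoted camera format (assuming 24-bit uncompressed colour; the bytes-per-pixel figure is an assumption, not stated in the abstract):

```python
width, height = 1024, 768
bytes_per_pixel = 3            # assumed 24-bit uncompressed colour
fps = 30

bytes_per_frame = width * height * bytes_per_pixel
mb_per_second = bytes_per_frame * fps / 1e6    # per camera, before any compression
print(round(mb_per_second, 1))                  # roughly 70.8 MB/s
```

    At roughly 70 MB/s per camera, a synchronized two-camera setup explains why uncompressed acquisition in a patient's room was noteworthy at the time.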

  9. Examining the Effectiveness of Digital Video Recordings on Oral Performance of EFL Learners

    Science.gov (United States)

    Göktürk, Nazlinur

    2016-01-01

    This study reports the results of an action-based study conducted in an EFL class to examine whether digital video recordings would contribute to the enhancement of EFL learners' oral fluency skills. It also investigates the learners' perceptions of the use of digital video recordings in a speaking class. 10 Turkish EFL learners participated in…

  10. Video Recorded Feedback for Self Regulation of Prospective Music Teachers in Piano Lessons

    Science.gov (United States)

    Deniz, Jale

    2012-01-01

    The main purpose of the study is enabling the prospective teachers to make self-regulations by video recording their piano performances with their instructors and feedbacks of their instructors and detect the views of specific students concerning these video records. The research was carried out during 2008-2009 academic year in Marmara…

  11. Minding the Music: Neuroscience, Video Recording, and the Pianist

    Science.gov (United States)

    Schlosser, Milton

    2011-01-01

    Research in music education asserts that video review by performers facilitates self-directed learning and transforms performing. Yet, certain videos may be traumatic for musicians to view; those who perceive themselves as failing or experience performance-related failures are prone to feelings of distress and sadness that can negatively affect…

  12. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time consum...

  13. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  14. Compression of mixed video and graphics images for TV systems

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; de With, Peter H. N.

    1998-01-01

    The diversity of TV images has increased with the growing application of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on runlength and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit rate control completes the system. Computer simulations show very high quality with a compression factor between 2 and 3.
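    Run-length coding, the first of the two lossless techniques mentioned, can be sketched as follows (a generic illustration, not the paper's codec):

```python
def rle_encode(pixels):
    """Run-length encode a 1-D sequence of pixel values into (value, count) pairs.
    Flat graphics regions collapse into very few runs, which is why RLE suits
    synthetic graphics content."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Invert rle_encode: expand (value, count) pairs back into pixels."""
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0]
encoded = rle_encode(row)
print(encoded)                      # [(0, 3), (255, 2), (0, 1)]
assert rle_decode(encoded) == row   # lossless round trip
```

    In a full codec these runs would then be entropy-coded (e.g. arithmetically), squeezing out the remaining statistical redundancy.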

  15. Analysis of simulated angiographic procedures: part 1--capture and presentation of audio and video recordings.

    Science.gov (United States)

    Duncan, James R; Glaiberman, Craig B

    2006-12-01

    To assess different methods of recording angiographic simulations and to determine how such recordings might be used for training and research. Two commercially available high-fidelity angiography simulations, the Mentice Vascular Interventional Simulation Trainer and the Simbionix AngioMentor, were used for data collection. Video and audio records of simulated procedures were created by different methods, including software-based screen capture, video splitters and converters, and external cameras. Recording parameters were varied, and the recordings were transferred to computer workstations for postprocessing and presentation. The information displayed on the simulators' computer screens could be captured by each method. Although screen-capture software provided the highest resolution, workflow considerations favored a hardware-based solution that duplicated the video signal and recorded the data stream(s) at lower resolutions. Additional video and audio recording devices were used to monitor the angiographer's actions during the simulated procedures. The multiple audio and video files were synchronized and composited with personal computers equipped with commercially available video editing software. Depending on the needs of the intended audience, the resulting files could be distributed and displayed at full or reduced resolutions. The capture, editing, presentation, and distribution of synchronized multichannel audio and video recordings holds great promise for angiography training and simulation research. To achieve this potential, technical challenges will need to be met, and content will need to be tailored to suit the needs of trainees and researchers.

  16. Seizure semiology inferred from clinical descriptions and from video recordings. How accurate are they?

    Science.gov (United States)

    Beniczky, Simona Alexandra; Fogarasi, András; Neufeld, Miri; Andersen, Noémi Becser; Wolf, Peter; van Emde Boas, Walter; Beniczky, Sándor

    2012-06-01

    To assess how accurate the interpretation of seizure semiology is when inferred from witnessed seizure descriptions and from video recordings, five epileptologists analyzed 41 seizures from 30 consecutive patients who had clinical episodes in the epilepsy monitoring unit. For each clinical episode, the consensus conclusions (at least 3 identical choices) based on the descriptions and, separately, of the video recordings were compared with the clinical conclusions at the end of the diagnostic work-up, including data from the video-EEG recordings (reference standard). Consensus conclusion was reached in significantly more cases based on the interpretation of video recordings (88%) than on the descriptions (66%), and the overall accuracy was higher for the video recordings (85%) than for the descriptions (54%). When consensus was reached, the concordance with the reference standard was substantial for the descriptions (k=0.67) and almost perfect for the video recordings (k=0.95). Video recordings significantly increase the accuracy of seizure interpretation. Copyright © 2012 Elsevier Inc. All rights reserved.
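    The reported agreement values (k=0.67, k=0.95) are Cohen's kappa scores; a minimal sketch of the statistic, with invented example labels:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical labels, (p_observed - p_chance) / (1 - p_chance)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical seizure-type labels from two observers (not the study's data).
a = ["focal", "focal", "generalized", "psychogenic"]
b = ["focal", "generalized", "generalized", "psychogenic"]
print(cohens_kappa(a, b))   # about 0.64
```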

  17. Applying deep learning to classify pornographic images and videos

    OpenAIRE

    Moustafa, Mohamed

    2015-01-01

    It is no secret that pornographic material is now one click away from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require vast experience to design the classifier, including one or more of the popular computer vision feature descriptors. We propose to build a clas...

  18. The influence of video recordings on beginning therapist’s learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    2010-01-01

    Background: Due to the development of technologies and the low costs, video recordings of psychotherapy sessions have gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents…

  19. The influence of video recordings on beginning therapists’ learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    Background: Due to the development of technologies and the low costs, video recordings of psychotherapy sessions have gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents…

  20. Accomplished Teaching: Using Video Recorded Micro-Teaching Discourse to Build Candidate Teaching Competencies

    Science.gov (United States)

    Shaw, Denise

    2017-01-01

    The objectives of this article are to present the findings of video-recorded communication between teacher candidates and peers during simulated micro-teaching. The micro-teaching activity in its entirety combines conventional face-to-face interaction, video micro-teaching, peer and instructor feedback, alongside self-reflection to undergird the…

  1. Video-recorded accidents conflicts and road user behaviour: A step forward in traffic safety research

    NARCIS (Netherlands)

    Horst, A.R.A. van der

    2008-01-01

    TNO Human Factors conducted long-term video observations to collect data on the pre-crash phase of real accidents (what exactly happened just before the collision?). The video recordings of collisions were used to evaluate and validate the safety value of indepth accident analyses, road scene

  2. Video-recorded accidents conflicts and road user behaviour: A step forward in traffic safety research

    NARCIS (Netherlands)

    Horst, A.R.A. van der

    2013-01-01

    TNO conducted long-term video observations to collect data on the pre-crash phase of real accidents (what exactly happened just before the collision?). The video recordings of collisions were used to evaluate and validate the safety value of in-depth accident analyses, road scene analyses, and

  3. Onboard Systems Record Unique Videos of Space Missions

    Science.gov (United States)

    2010-01-01

    Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.

  4. Mathematics from Still and Video Images.

    Science.gov (United States)

    Oldknow, Adrian

    2003-01-01

    Discusses simple tools for digitizing objects of interest from image files for treatment in other software such as graph plotters, data-handling software, or graphic calculators. Explores methods using MS Paint, Excel, DigitiseImage and TI Interactive (TII). (Author/NB)

  5. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  6. Literacy in the 21st Century: The Fourth R--Video Recording

    Science.gov (United States)

    Siegle, Del

    2009-01-01

    Young people are surrounded by visual images, and they naturally are drawn to viewing and creating videos. Educators are remiss if they do not seize on this significant communication and learning tool. Similar to projects involving writing, video projects allow students to communicate their ideas, thoughts, and feelings. Therefore, most…

  7. An Introduction to Recording, Editing, and Streaming Picture-in-Picture Ultrasound Videos.

    Science.gov (United States)

    Rajasekaran, Sathish; Hall, Mederic M; Finnoff, Jonathan T

    2016-08-01

    This paper describes the process by which high-definition resolution (up to 1920 × 1080 pixels) ultrasound video can be captured in conjunction with high-definition video of the transducer position (picture-in-picture). In addition, we describe how to edit the recorded video feeds to combine both feeds, and to crop, resize, split, stitch, cut, annotate videos, and also change the frame rate, insert pictures, edit the audio feed, and use chroma keying. We also describe how to stream a picture-in-picture ultrasound feed during a videoconference. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
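    The picture-in-picture compositing step can be sketched with plain array slicing (frame sizes and offsets below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def picture_in_picture(main, inset, top=10, left=10):
    """Composite an inset frame (e.g. the transducer-position camera) onto a
    main frame (e.g. the ultrasound feed) at the given pixel offset.
    Frames are H x W x 3 uint8 arrays; the inset simply overwrites pixels."""
    out = main.copy()
    h, w = inset.shape[:2]
    out[top:top + h, left:left + w] = inset
    return out

main = np.zeros((1080, 1920, 3), dtype=np.uint8)          # black 1080p frame
inset = np.full((270, 480, 3), 255, dtype=np.uint8)        # white quarter-size inset
frame = picture_in_picture(main, inset)
print(frame[10, 10].tolist(), frame[500, 500].tolist())    # inset vs. main pixel
```

    A real editor applies this per frame across both synchronized video streams; chroma keying replaces the rectangular overwrite with a colour-based mask.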

  8. The advantages of using photographs and video images in ...

    African Journals Online (AJOL)

    Background: The purpose of this study was to evaluate the advantages of a telephone consultation with a specialist in paediatric surgery after taking photographs and video images by a general practitioner for the diagnosis of some diseases. Materials and Methods: This was a prospective study of the reliability of paediatric ...

  9. Low-noise video amplifiers for imaging CCD's

    Science.gov (United States)

    Scinicariello, F.

    1976-01-01

    Various techniques were developed which enable the CCD (charge coupled device) imaging array user to obtain optimum performance from the device. A CCD video channel was described, and detector-preamplifier interface requirements were examined. A noise model for the system was discussed at length and laboratory data presented and compared to predicted results.

  10. High-sensitivity hyperspectral imager for biomedical video diagnostic applications

    Science.gov (United States)

    Leitner, Raimund; Arnold, Thomas; De Biasio, Martin

    2010-04-01

    Video endoscopy allows physicians to visually inspect inner regions of the human body using a camera and only minimally invasive optical instruments. It has become an every-day routine in clinics all over the world. Recently a technological shift was made to increase the resolution from PAL/NTSC to HDTV. But despite a vast literature on in-vivo and in-vitro experiments with multi-spectral point and imaging instruments that suggests that a wealth of information for diagnostic overlays is available in the visible spectrum, the technological evolution from colour to hyper-spectral video endoscopy is overdue. There were two approaches (NBI, OBI) that tried to increase the contrast for better visualisation by using more than three wavelengths, but controversial discussions about the real benefit of contrast enhancement alone motivated a more comprehensive approach using the entire spectrum and pattern recognition algorithms. Up to now, hyper-spectral equipment was too slow to acquire a multi-spectral image stack at reasonable video rates, rendering video endoscopy applications impossible. Recently, the availability of fast and versatile tunable filters with switching times below 50 microseconds made instrumentation for hyper-spectral video endoscopes feasible. This paper describes a demonstrator for hyper-spectral video endoscopy and the results of clinical measurements using this demonstrator after otolaryngoscopic investigations and thorax surgeries. The application investigated here is the detection of dysplastic tissue, although hyper-spectral video endoscopy is of course not limited to cancer detection. Other applications are the detection of dysplastic tissue or polyps in the colon or the gastrointestinal tract.

  11. A beamforming video recorder for integrated observations of dolphin behavior and vocalizations (L)

    Science.gov (United States)

    Ball, Keenan R.; Buck, John R.

    2005-03-01

    In this Letter we describe a beamforming video recorder consisting of a video camera at the center of a 16-hydrophone array. A broadband frequency-domain beamforming algorithm is used to estimate the azimuth and elevation of each detected sound. These estimates are used to generate a visual cue indicating the location of the sound source within the video recording, which is synchronized to the acoustic data. The system provided accurate results in both lab calibrations and a field test. The system allows researchers to correlate the acoustic and physical behaviors of marine mammals during studies of social interactions.
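    A minimal sketch of the frequency-domain (delay-and-sum) beamforming idea for azimuth estimation on a linear array (the array geometry, frequency, and simulated source below are invented for illustration; the actual system also estimates elevation):

```python
import numpy as np

def estimate_azimuth(snapshots, positions, freq, c=1500.0):
    """Conventional frequency-domain beamformer for a linear hydrophone array.
    snapshots: complex array (n_sensors, n_snapshots) at one frequency bin.
    positions: sensor x-coordinates in metres. Returns azimuth in degrees:
    scan candidate angles and pick the one with maximum beam power."""
    k = 2 * np.pi * freq / c                       # acoustic wavenumber
    best, best_p = 0.0, -np.inf
    for th in np.linspace(-90, 90, 361):           # 0.5-degree grid
        steer = np.exp(-1j * k * positions * np.sin(np.radians(th)))
        p = np.mean(np.abs(steer.conj() @ snapshots) ** 2)
        if p > best_p:
            best, best_p = th, p
    return best

# Simulated 16-element array, half-wavelength spacing, plane wave from +30 deg.
freq, c = 5000.0, 1500.0
pos = np.arange(16) * (c / freq) / 2
k = 2 * np.pi * freq / c
rng = np.random.default_rng(0)
snap = (np.exp(-1j * k * pos * np.sin(np.radians(30.0)))[:, None]
        * np.exp(1j * rng.uniform(0, 2 * np.pi, (1, 8))))  # 8 random-phase snapshots
print(estimate_azimuth(snap, pos, freq))
```

    Broadband operation, as in the Letter, repeats this per frequency bin and combines the beam powers before picking the peak.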

  12. Image-guided transorbital procedures with endoscopic video augmentation.

    Science.gov (United States)

    DeLisi, Michael P; Mawn, Louise A; Galloway, Robert L

    2014-09-01

    Surgical interventions to the orbital space behind the eyeball are limited to highly invasive procedures due to the confined nature of the region along with the presence of several intricate soft tissue structures. A minimally invasive approach to orbital surgery would enable several therapeutic options, particularly new treatment protocols for optic neuropathies such as glaucoma. The authors have developed an image-guided system for the purpose of navigating a thin flexible endoscope to a specified target region behind the eyeball. Navigation within the orbit is particularly challenging despite its small volume, as the presence of fat tissue occludes the endoscopic visual field while the surgeon must constantly be aware of optic nerve position. This research investigates the impact of endoscopic video augmentation to targeted image-guided navigation in a series of anthropomorphic phantom experiments. A group of 16 surgeons performed a target identification task within the orbits of four skull phantoms. The task consisted of identifying the correct target, indicated by the augmented video and the preoperative imaging frames, out of four possibilities. For each skull, one orbital intervention was performed with video augmentation, while the other was done with the standard image guidance technique, in random order. The authors measured a target identification accuracy of 95.3% and 85.9% for the augmented and standard cases, respectively, with statistically significant improvement in procedure time (Z=-2.044, p=0.041) and intraoperator mean procedure time (Z=2.456, p=0.014) when augmentation was used. Improvements in both target identification accuracy and interventional procedure time suggest that endoscopic video augmentation provides valuable additional orientation and trajectory information in an image-guided procedure. Utilization of video augmentation in transorbital interventions could further minimize complication risk and enhance surgeon comfort and

  13. Recognition of Bullet Holes Based on Video Image Analysis

    Science.gov (United States)

    Ruolin, Zhu; Jianbo, Liu; Yuan, Zhang; Xiaoyu, Wu

    2017-10-01

    The technology of computer vision is used in military shooting training. In order to overcome the limitations of bullet-hole recognition using video image analysis, which suffers from over-detection or missed detections, this paper adopts a support vector machine algorithm and a convolutional neural network to extract and recognize bullet holes in digital video, and compares their performance. It extracts HOG features of bullet holes and trains an SVM classifier quickly, even though the target is in an outdoor environment. Experiments show that the support vector machine algorithm used in this paper achieves fast and efficient extraction and recognition of bullet holes, improving the efficiency of shooting training.
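    A minimal HOG-style building block, a magnitude-weighted histogram of gradient orientations over a patch, can be sketched as follows (a generic illustration, not the paper's exact descriptor):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Minimal HOG-style descriptor: histogram of unsigned gradient
    orientations over a greyscale patch, weighted by gradient magnitude
    and L2-normalised. Such vectors would feed an SVM classifier."""
    gy, gx = np.gradient(patch.astype(float))      # row- and column-gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation [0, 180)
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A horizontal intensity ramp: all gradient energy lies in the first bin.
patch = np.tile(np.arange(8, dtype=float), (8, 1))
print(orientation_histogram(patch).round(2))
```

    A full HOG descriptor computes such histograms over a grid of cells and concatenates block-normalised groups of them.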

  14. High definition in minimally invasive surgery: a review of methods for recording, editing, and distributing video.

    Science.gov (United States)

    Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L

    2008-09-01

    The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.

  15. New methodical approach to the assessment of video record which is used when training of judoists

    Directory of Open Access Journals (Sweden)

    Konstantin Ananchenko

    2016-09-01

    Full Text Available Purpose: to offer a new methodical approach for the assessment of video recordings used in the training of judoists. Material & Methods: the assessment of video recordings used in the course of training of judoists was carried out in the research; 23 masters of sports of Ukraine and masters of sports of international class were polled. Results: the flexibility of the new methodical approach for video recording assessment is proved. The methodical approach assumes the use of a unique mathematical apparatus: the methods of pair comparisons and arrangement of priorities. It can be used for the assessment of video films for judoists of various skill levels, ages and physical parameters, for the individual training of certain judoists, given the correct selection of comparison parameters. Conclusions: the use of the given methodical approach will promote the efficiency of competitive activity and of the coach's work, and will allow judoists to reach high levels of individual skill.
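    The "arrangement of priorities" step can be illustrated with a Saaty-style pairwise-comparison matrix reduced to a priority vector (the geometric-mean approximation and the judgement values below are assumptions for illustration, not the authors' data):

```python
import numpy as np

def priority_weights(pairwise):
    """Priority vector from a reciprocal pairwise-comparison matrix,
    approximated by normalised geometric row means (a standard stand-in
    for the principal eigenvector)."""
    M = np.asarray(pairwise, dtype=float)
    gm = M.prod(axis=1) ** (1.0 / M.shape[1])
    return gm / gm.sum()

# Hypothetical expert judgement over three video clips:
# clip A is 3x better than B and 5x better than C; B is 2x better than C.
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(priority_weights(M).round(3))
```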

  16. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers (TIs) stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs, and a slower growth in the pixel count of TIs in comparison to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  17. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  18. The history of consumer magnetic video tape recording, from a rarity to a mass product

    NARCIS (Netherlands)

    Luitjens, S.B.; Rijckaert, A.M.A.

    1999-01-01

    Since the first experiments on magnetic recording by Valdemar Poulsen in 1898 the use of this technology has grown tremendously and magnetic storage is used in almost every home in the world. A special challenge was the recording of video signals which need a high bandwidth. In the 1950s, television

  19. Introducing video recording in primary care midwifery for research purposes: procedure, dataset, and use.

    Science.gov (United States)

    Spelten, Evelien R; Martin, Linda; Gitsels, Janneke T; Pereboom, Monique T R; Hutton, Eileen K; van Dulmen, Sandra

    2015-01-01

    Video recording studies have been found to be complex; however, very few studies describe the actual introduction and enrolment of the study, the resulting dataset and its interpretation. In this paper we describe the introduction and the use of video recordings of health care provider (HCP)-client interactions in primary care midwifery for research purposes. We also report on the process of data management, data coding and the resulting data set. We describe our experience in undertaking a study using video recording to assess the interaction of the midwife and her client in the first antenatal consultation, in a real-life clinical practice setting in the Netherlands. Midwives from six practices across the Netherlands were recruited to videotape 15-20 intakes. The introduction, complexity and intrusiveness of the study were discussed within the research group. The number of valid and missing recordings was measured; reasons not to participate, non-response analyses, and the inter-rater reliability of the coded videotapes were assessed. Video recordings were supplemented by questionnaires for midwives and clients. The Roter Interaction Analysis System (RIAS) was used for coding, as well as an obstetric topics scale. At the introduction of the study, more initial hesitation in co-operation was found among the midwives than among their clients. The intrusive nature of the recording on the interaction was perceived to be minimal. The complex nature of the study affected recruitment and data collection. Combining the dataset with the questionnaires and medical records proved to be a challenge. The final dataset included videotapes of 20 midwives (7-23 recordings per midwife). Of the 460 eligible clients, 324 gave informed consent. The study resulted in a significant dataset of first antenatal consultations, recording 269 clients and 194 partners. 
Video recording of midwife-client interaction was both feasible and challenging and resulted

  20. Registration and recognition in images and videos

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2014-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art  research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems.  The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year.This edited volume contains a selection of articles covering some of the talks and tutorials held during the last editions of the school. The chapters provide an in-depth overview o...

  1. Video-rate optical coherence tomography imaging with smart pixels

    Science.gov (United States)

    Beer, Stephan; Waldis, Severin; Seitz, Peter

    2003-10-01

    A novel concept for video-rate parallel acquisition of optical coherence tomography imaging is presented based on in-pixel demodulation. The main restrictions for parallel detection such as data rate, power consumption, circuit size and poor sensitivity are overcome with a smart pixel architecture incorporating an offset compensation circuit, a synchronous sampling stage, programmable time averaging and random pixel accessing, allowing envelope and phase detection in large 1D and 2D arrays.

  2. Survey on attacks in image and video watermarking

    Science.gov (United States)

    Vassaux, Boris; Nguyen, Philippe; Baudry, Severine; Bas, Patrick; Chassery, Jean-Marc

    2002-11-01

    Watermarking techniques have improved considerably over the past years, aiming to be ever more resistant to attacks. Although the original goal of watermarking was to secure digital data (audio, image and video), numerous attacks can still cast doubt on the owner's authenticity. Three groups of attacks can be distinguished: those that attempt to remove the watermark, those that impair the data sufficiently to falsify detection, and those that alter the detection process so that another person appears to be the owner of the data. Given the continuing development of ever more efficient attacks, this paper first presents a recent and exhaustive review of attacks in image and video watermarking. In a second part, the consequences of still-image watermarking attacks on video sequences are outlined, with particular attention to the recently created benchmarks: Stirmark, the benchmark proposed by the University of Geneva Vision Group, the one proposed by the Department of Informatics of the University of Thessaloniki, and finally the current work of the European Project Certimark. We present a comparison of these benchmarks and show how difficult it is to develop a self-sufficient benchmark, especially because of the complexity of intentional attacks.

  3. Measurement of thigmomorphogenesis and gravitropism by non-intrusive computerized video image processing

    Science.gov (United States)

    Jaffe, M. J.

    1984-01-01

    A video image processing instrument, DARWIN (Digital Analyser of Resolvable Whole-pictures by Image Numeration), was developed. It was programmed to measure stem or root growth and bending, and coupled to a specially mounted video camera so that it could automatically generate growth and bending curves during gravitropism. The growth of the plant is recorded on a video cassette recorder with a specially modified time-lapse function. At the end of the experiment, DARWIN analyses the growth or movement and prints out bending and growth curves. This system was used to measure thigmomorphogenesis in light-grown corn plants. If the plant is rubbed with an applied force load of 0.38 N, it grows faster than the unrubbed control, whereas 1.14 N retards its growth. Image analysis shows that most of the change in the rate of growth occurs in the first hour after rubbing. When DARWIN was used to measure gravitropism in dark-grown oat seedlings, it was found that the top side of the shoot contracts during the first hour of gravitational stimulus, whereas the bottom side begins to elongate after 10 to 15 minutes.

  4. The history of consumer magnetic video tape recording, from a rarity to a mass product

    Science.gov (United States)

    Luitjens, S. B.; Rijckaert, A. M. A.

    1999-03-01

    Since the first experiments on magnetic recording by Valdemar Poulsen in 1898 the use of this technology has grown tremendously and magnetic storage is used in almost every home in the world. A special challenge was the recording of video signals which need a high bandwidth. In the 1950s, television broadcasts had started which created a need for storage in the broadcast world. The first broadcast recorder was the Quadruplex from Ampex in 1956. Later solutions were found for application in the consumer market. Better mechanics, magnetic tapes and recording heads allowed the mass production of a cheap consumer recorder. The size and weight decreased tremendously and portable camcorders are very common. Recording of broadcasts, video rental and home movies are now very popular. The factors which contributed to the maturing of this technology will be reviewed in this paper.

  5. Translation between written spanish and video-recorded uruguayan sign language: a new challenge

    Directory of Open Access Journals (Sweden)

    Leonardo Peluso

    2015-12-01

    Full Text Available This paper deals with the concept of deferred textuality, which is more comprehensive than writing, as it allows the inclusion of sign language video-recordings. If one considers that sign language video-recordings are deferred textuality, it can also be argued that a literate culture can be developed around them, understood as the culture built around deferred textuality and the social and institutional practices it spawns. From this idea, I will show that translation between written texts in oral languages and video-recorded texts in sign languages is also possible. I will also point out that, in the case of Uruguay and its sign language (LSU), this kind of translation is imperative in the face of an increasingly demanding Deaf Community that is occupying new social spaces.

  7. Feature Extraction in IR Images Via Synchronous Video Detection

    Science.gov (United States)

    Shepard, Steven M.; Sass, David T.

    1989-03-01

    IR video images acquired by scanning imaging radiometers are subject to several problems which make measurement of small temperature differences difficult. Among these problems are 1) aliasing, which occurs when events at frequencies higher than the video frame rate are observed, 2) limited temperature resolution imposed by the 3-bit digitization available in existing commercial systems, and 3) susceptibility to noise and background clutter. Bandwidth-narrowing devices (e.g. lock-in amplifiers or boxcar averagers) are routinely used to achieve a high degree of signal-to-noise improvement for time-varying 1-dimensional signals. We will describe techniques which allow similar S/N improvement for 2-dimensional imagery acquired with an off-the-shelf scanning imaging radiometer system. These techniques are implemented in near-real-time, utilizing a microcomputer and specially developed hardware and software. We will also discuss the application of the system to feature extraction in cluttered images, and to the acquisition of events which vary faster than the frame rate.
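    The bandwidth-narrowing idea in this record, synchronous (lock-in) detection applied per pixel across a stack of frames, can be sketched as follows. All numbers (image size, frame rate, modulation frequency, noise level) are illustrative assumptions, not the authors' hardware parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

frames, h, w = 512, 8, 8      # illustrative stack of 8x8-pixel frames
f_mod, f_frame = 5.0, 60.0    # assumed modulation and frame rates (Hz)
t = np.arange(frames) / f_frame

# Synthetic scene: one pixel carries a weak modulated signal buried in
# noise that dominates it in any single frame.
amp = np.zeros((h, w))
amp[3, 4] = 0.3
stack = amp[None] * np.sin(2 * np.pi * f_mod * t)[:, None, None]
stack += rng.normal(0.0, 0.5, (frames, h, w))

# Per-pixel synchronous detection: correlate every pixel's time series
# with in-phase and quadrature references at the modulation frequency.
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
I = np.tensordot(ref_i, stack, axes=(0, 0)) * 2 / frames
Q = np.tensordot(ref_q, stack, axes=(0, 0)) * 2 / frames
magnitude = np.hypot(I, Q)   # noise averages out; the signal survives

peak = np.unravel_index(magnitude.argmax(), magnitude.shape)
print(peak)  # the pixel carrying the modulated signal stands out
```

    Averaging over many frames narrows the detection bandwidth around the modulation frequency, which is the same principle a lock-in amplifier applies to a single channel.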

  8. Videorec as gameplay: Recording playthroughs and video game engagement

    Directory of Open Access Journals (Sweden)

    Gabriel Menotti

    2014-03-01

    Full Text Available This paper outlines an alternative genealogy of “non-narrative machinima” by means of tracing a parallel with different cinematographic genres. It analyses the circuit of production and distribution of such material as a field for modes of superplay, in which users both compete and collaborate. In doing so, it proposes that the recording of playthroughs, a practice seemingly secondary to videogame consumption, might constitute an essential part of its culture and development, creating meaningful interfaces between players and industries.

  9. The client’s ideas and fantasies of the supervisor in video recorded psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    2010-01-01

    Aim: Despite the current relatively widespread use of video as a supervisory tool, there are few empirical studies on how recordings influence the relationship between client and supervisor. This paper presents a qualitative, explorative study of clients’ experience of having their psychotherapy...... on three related topics: a) the clients’ experience of the video recordings’ influence on therapy; b) the therapeutic relationship; and c) the fantasy relationship with the supervisor. The answers were analyzed in accordance with Hill et al.’s (1997; 2005) guidelines for conducting Consensual Qualitative Research...... by two independent researchers and a third auditor. Furthermore, clients rated the overall influence of video recording and the influence across time on Likert scales. Results: This paper focuses only on the results concerning the clients’ fantasy relationship to the supervisor. In general the clients...

  10. An integrable, web-based solution for easy assessment of video-recorded performances

    DEFF Research Database (Denmark)

    Subhi, Yousif; Todsen, Tobias; Konge, Lars

    2014-01-01

    Assessment of clinical competencies by direct observation is problematic for two main reasons: the identity of the examinee influences the assessment scores, and direct observation demands experts at the exact location and the exact time. Recording the performance can overcome these problems......; however, managing video recordings and assessment sheets is troublesome and may lead to missing or incorrect data. Currently, no existing software solution can provide a local solution for the management of videos and assessments, but this is necessary as assessment scores are confidential information......, and access to this information should be restricted to select personnel. A local software solution may also ease the need for customization to local needs and integration into existing user databases or project management software. We developed an integrable web-based solution for easy assessment of video...

  11. Seizure semiology inferred from clinical descriptions and from video recordings. How accurate are they?

    DEFF Research Database (Denmark)

    Beniczky, Simona Alexandra; Fogarasi, András; Neufeld, Miri

    2012-01-01

    To assess how accurate the interpretation of seizure semiology is when inferred from witnessed seizure descriptions and from video recordings, five epileptologists analyzed 41 seizures from 30 consecutive patients who had clinical episodes in the epilepsy monitoring unit. For each clinical episode...

  12. Assessing Nonverbal Communication Skills through Video Recording and Debriefing of Clinical Skill Simulation Exams

    Science.gov (United States)

    Heinerichs, Scott; Cattano, Nicole M.; Morrison, Katherine E.

    2013-01-01

    Context: Nonverbal communication (NVC) skills are a critical component to clinician interactions with patients, and no research exists on the investigation of athletic training students' nonverbal communication skills. Video recording and debriefing have been identified as methods to assess and educate students' NVC skills in other allied health…

  13. The impact of online video lecture recordings and automated feedback on student performance

    NARCIS (Netherlands)

    Wieling, M. B.; Hofman, W. H. A.

    To what extent a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures and the offering of online quizzes with appropriate feedback has an additional positive impact on the performance of these students compared to the traditional

  14. An Intensive Presentations Course in English for Aeronautical Engineering Students Using Cyclic Video Recordings

    Science.gov (United States)

    Tatzl, Dietmar

    2017-01-01

    This article presents the design and evaluation of an intensive presentations course for aeronautical engineering students based on cyclic video recordings. The target group of this course in English for specific purposes (ESP) were undergraduate final-year students who needed to improve their presentation and foreign language skills to prepare…

  15. The air gap between tape and drum in a video recorder

    NARCIS (Netherlands)

    Moes, H.

    1991-01-01

    Lubrication with ambient air is not widely applied. The best-known application is the "foil bearing" in tape recording systems for audio, video and computer applications, where the gap height needed for effective lubrication is quite easily attained. This air gap reduces tape

  16. Use of low-cost video recording device in reflective practice in cataract surgery.

    Science.gov (United States)

    Bhogal, Maninder M; Angunawela, Romesh I; Little, Brian C

    2010-04-01

    Reflective surgical practice is invaluable for surgeons at all levels of experience. For trainees in particular, every surgical opportunity must be optimized for its learning potential. Recording and reviewing cataract surgery is an invaluable tool. We describe a video recording device that has the advantages of ease of use; low cost; portability; and ease of review, editing, and dissemination, all of which encourage regular use and reflective surgical practice. Copyright (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  17. Video Recording With a GoPro in Hand and Upper Extremity Surgery.

    Science.gov (United States)

    Vara, Alexander D; Wu, John; Shin, Alexander Y; Sobol, Gregory; Wiater, Brett

    2016-10-01

    Video recordings of surgical procedures are an excellent tool for presentations, analyzing self-performance, illustrating publications, and educating surgeons and patients. Recording the surgeon's perspective with high-resolution video in the operating room or clinic has become readily available, and advances in software improve the ease of editing these videos. A GoPro HERO 4 Silver or Black was mounted on a head strap and worn over the surgical scrub cap, above the loupes of the operating surgeon. Five live surgical cases were recorded with the camera. The videos were uploaded to a computer and subsequently edited with iMovie or the GoPro software. The optimal settings for both the Silver and Black editions, when operating room lights are used, were determined to be a narrow view, 1080p, 60 frames per second (fps), spot meter on, protune on with auto white balance, exposure compensation at -0.5, and no polarizing lens. When the operating room lights were not used, the standard settings for a GoPro camera were found to be ideal for positioning and editing (4K, 15 frames per second, spot meter and protune off). The GoPro HERO 4 provides high-quality, cost-effective video recording of upper extremity surgical procedures from the surgeon's perspective. Challenges include finding the optimal settings for each surgical procedure and the length of recording due to battery life limitations. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  18. CONTEXT-BASED URBAN TERRAIN RECONSTRUCTION FROM IMAGES AND VIDEOS

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2012-07-01

    Full Text Available Detection of buildings and vegetation, and even more reconstruction of urban terrain from sequences of aerial images and videos, is known to be a challenging task. It has been established that those methods that have as input a high-quality Digital Surface Model (DSM) are more straightforward and produce more robust and reliable results than those image-based methods that require matching line segments or even whole regions. This motivated us to develop a new dense matching technique for DSM generation that is capable of simultaneous integration of multiple images in the reconstruction process. The DSMs generated by this new multi-image matching technique can be used for urban object extraction. In the first contribution of this paper, two examples of external sources of information added to the reconstruction pipeline will be shown. GIS layers are used for recognition of streets and for suppressing false alarms in the depth maps caused by moving vehicles, while the near-infrared channel is applied for separating vegetation from buildings. Three examples of data sets, including both UAV-borne video sequences with a relatively high number of frames and high-resolution (10 cm ground sample distance) data sets consisting of (few) spatially and temporally diverse images from large-format aerial frame cameras, will be presented. An extensive quantitative evaluation of the Vaihingen block from the ISPRS benchmark on urban object detection makes clear that our procedure allows a straightforward, efficient, and reliable instantiation of 3D city models.

  19. A System for Reflective Learning Using Handwriting Tablet Devices for Real-Time Event Bookmarking into Simultaneously Recorded Videos

    Science.gov (United States)

    Nakajima, Taira

    2012-01-01

    The author demonstrates a new system useful for reflective learning. Our new system offers an environment that one can use handwriting tablet devices to bookmark symbolic and descriptive feedbacks into simultaneously recorded videos in the environment. If one uses video recording and feedback check sheets in reflective learning sessions, one can…

  20. Observing the Testing Effect using Coursera Video-Recorded Lectures: A Preliminary Study.

    Science.gov (United States)

    Yong, Paul Zhihao; Lim, Stephen Wee Hun

    2015-01-01

    We investigated the testing effect in Coursera video-based learning. One hundred and twenty-three participants either (a) studied an instructional video-recorded lecture four times, (b) studied the lecture three times and took one recall test, or (c) studied the lecture once and took three tests. They then took a final recall test, either immediately or a week later, through which their learning was assessed. Whereas repeated studying produced better recall performance than did repeated testing when the final test was administered immediately, testing produced better performance when the final test was delayed until a week after. The testing effect was observed using Coursera lectures. Future directions are documented.

  1. Audiovisual presentation of video-recorded stimuli at a high frame rate.

    Science.gov (United States)

    Lidestam, Björn

    2014-06-01

    A method for creating and presenting video-recorded synchronized audiovisual stimuli at a high frame rate, which would be highly useful for psychophysical studies on, for example, just-noticeable differences and gating, is presented. Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli at a desired frame rate on a cathode ray tube display using MATLAB and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, & Rönnberg, Frontiers in Psychology 4:359, 2013) are presented as an example of the implementation of playback at 120 fps.

  2. A comparison of camera trap and permanent recording video camera efficiency in wildlife underpasses.

    Science.gov (United States)

    Jumeau, Jonathan; Petrod, Lana; Handrich, Yves

    2017-09-01

    In the current context of biodiversity loss through habitat fragmentation, the effectiveness of wildlife crossings, installed at great expense as compensatory measures, is of vital importance for ecological and socio-economic actors. The evaluation of these structures is directly impacted by the efficiency of monitoring tools (camera traps…), which are used to assess the effectiveness of these crossings by observing the animals that use them. The aim of this study was to quantify the efficiency of camera traps in a wildlife crossing evaluation. Six permanent recording video systems sharing the same field of view as six Reconyx HC600 camera traps installed in three wildlife underpasses were used to assess the exact proportion of missed events (event being the presence of an animal within the field of view), and the error rate concerning underpass crossing behavior (defined as either Entry or Refusal). A sequence of photographs was triggered by either animals (true trigger) or artefacts (false trigger). We quantified the number of false triggers that had actually been caused by animals that were not visible on the images ("false" false triggers). Camera traps failed to record 43.6% of small mammal events (voles, mice, shrews, etc.) and 17% of medium-sized mammal events. The type of crossing behavior (Entry or Refusal) was incorrectly assessed in 40.1% of events, with a higher error rate for entries than for refusals. Among the 3.8% of false triggers, 85% of them were "false" false triggers. This study indicates a global underestimation of the effectiveness of wildlife crossings for small mammals. Means to improve the efficiency are discussed.

  3. Using underwater video imaging as an assessment tool for coastal condition

    Science.gov (United States)

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  4. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.
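    The measurements described here, vessel diameter and functional capillary density, ultimately reduce to counting pixels in a thresholded image and applying a magnification calibration. A minimal sketch on a synthetic binary mask, with a hypothetical µm-per-pixel calibration (not the paper's actual values):

```python
import numpy as np

UM_PER_PIXEL = 1.6   # assumed calibration (µm per pixel), hypothetical

# Synthetic binary mask of one vertical vessel, 12 pixels wide, that
# crosses the whole 100 x 100 pixel field of view.
mask = np.zeros((100, 100), dtype=bool)
mask[:, 40:52] = True

# Vessel diameter: mean per-row width of the mask, scaled to microns.
widths_px = mask.sum(axis=1)
diameter_um = widths_px.mean() * UM_PER_PIXEL   # about 19.2 µm here

# Functional capillary density: perfused vessel length per image area,
# here approximated as one centreline pixel per row containing vessel.
field_area_cm2 = (100 * UM_PER_PIXEL * 1e-4) ** 2
length_cm = (widths_px > 0).sum() * UM_PER_PIXEL * 1e-4
fcd_cm_per_cm2 = length_cm / field_area_cm2     # about 62.5 cm/cm²

print(diameter_um, fcd_cm_per_cm2)
```

    The same pixel-count-times-calibration logic underlies measurements made interactively in a general-purpose image editor.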

  5. Tracking of multiple points using color video image analyzer

    Science.gov (United States)

    Nennerfelt, Leif

    1990-08-01

    The Videomex-X is a new product intended for use in biomechanical measurement. It tracks up to six points at 60 frames per second using colored markers placed on the subject. The system can be used for applications such as gait analysis, studying facial movements, or tracking the pattern of movements of individuals in a group. The Videomex-X comprises a high-speed color image analyzer, an RGB color video camera, an IBM AT-compatible computer and motion analysis software. The markers are made from brightly colored plastic disks, and each marker is a different color. Since the markers are unique, the problem of misidentification of markers does not occur. The Videomex-X performs real-time analysis so that the researcher can get immediate feedback on the subject's performance. High-speed operation is possible because the system uses distributed processing. The image analyzer is a hardwired parallel image processor which identifies the markers within the video picture and computes their x-y locations. The image analyzer sends the x-y coordinates to the AT computer, which performs additional analysis and presents the results. The x-y coordinate data acquired during the experiment may be streamed to the computer's hard disk. This allows the data to be re-analyzed repeatedly using different analysis criteria. The original Videomex-X tracked in two dimensions; however, a 3-D system has recently been completed. The algorithm used by the system to derive performance results from the x-y coordinates is contained in a separate ASCII file. These files can be modified by the operator to produce the required type of data reduction.
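    The per-marker tracking described in this record amounts to isolating each marker's unique colour and taking the centroid of the matching pixels. A minimal single-frame sketch, with a synthetic RGB frame and hypothetical colour thresholds (the instrument's hardwired processor is not being reproduced here):

```python
import numpy as np

# Synthetic 120x160 RGB frame containing one red and one green marker.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[30:36, 50:56] = (255, 0, 0)    # red marker
frame[80:86, 110:116] = (0, 255, 0)  # green marker

def centroid(mask):
    """x-y centroid of the pixels set in a boolean mask."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

# Each marker has a unique colour, so a simple channel threshold
# identifies it unambiguously (no marker misidentification).
red_mask = (frame[..., 0] > 200) & (frame[..., 1] < 50)
green_mask = (frame[..., 1] > 200) & (frame[..., 0] < 50)

print(centroid(red_mask), centroid(green_mask))
```

    Running this per frame yields the x-y coordinate streams that a downstream program can reduce to gait or movement measures.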

  6. A video precipitation sensor for imaging and velocimetry of hydrometeors

    Science.gov (United States)

    Liu, X. C.; Gao, T. C.; Liu, L.

    2014-07-01

    A new method to determine the shape and fall velocity of hydrometeors by using a single CCD camera is proposed in this paper, and a prototype of a video precipitation sensor (VPS) is developed. The instrument consists of an optical unit (collimated light source with multi-mode fibre cluster), an imaging unit (planar array CCD sensor), an acquisition and control unit, and a data processing unit. The cylindrical space between the optical unit and the imaging unit is the sampling volume (300 mm × 40 mm × 30 mm). As precipitation particles fall through the sampling volume, the CCD camera exposes twice in a single frame, which yields double-exposure images of the particles. The size and shape can be obtained from the particle images; the fall velocity can be calculated from the particle displacement in the double-exposure image and the interval time; the drop size distribution and velocity distribution, precipitation intensity, and accumulated precipitation amount can be calculated by time integration. The innovation of the VPS is that the shape, size, and velocity of precipitation particles can be measured by only one planar array CCD sensor, which addresses the disadvantages of linear-scan CCD disdrometers and impact disdrometers. Field measurements of rainfall demonstrate the VPS's capability to measure the micro-physical properties of single particles and the integral parameters of precipitation.
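    The fall-velocity computation described here is simply the particle's displacement between the two exposures divided by the known exposure interval. A sketch with illustrative numbers (the interval and the pixel calibration are assumptions, not the instrument's actual parameters):

```python
# Fall velocity from a double-exposure image: the same particle appears
# twice in one frame; its displacement over the known exposure interval
# gives its speed. Calibration and interval below are assumed values.
MM_PER_PIXEL = 0.12   # hypothetical optical calibration
INTERVAL_S = 2e-3     # hypothetical delay between the two exposures

def fall_velocity(y_first_px, y_second_px):
    """Velocity in m/s from the particle's two vertical image positions."""
    displacement_m = (y_second_px - y_first_px) * MM_PER_PIXEL / 1000
    return displacement_m / INTERVAL_S

# A drop whose two images are 70 pixels apart falls at about 4.2 m/s:
print(fall_velocity(100, 170))
```

    Accumulating such per-particle sizes and velocities over time gives the drop size and velocity distributions from which intensity and accumulated precipitation are integrated.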

  7. Effect of clinical information and previous exam execution on observer agreement and reliability in the analysis of hysteroscopic video-recordings.

    Science.gov (United States)

    Martinho, Margarida Suzel Lopes; da Costa Santos, Cristina Maria Nogueira; Silva Carvalho, João Luís Mendonça; Bernardes, João Francisco Montenegro Andrade Lima

    2017-12-07

    Inter-observer agreement and reliability in hysteroscopic image assessment remain uncertain, and the factors that may influence them have only been studied in relation to the experience of hysteroscopists. We aim to assess the effect of clinical information and previous exam execution on observer agreement and reliability in the analysis of hysteroscopic video-recordings. Ninety hysteroscopies were video-recorded and randomized into a group without (Group 1) and with clinical information (Group 2). The videos were independently analyzed by three hysteroscopists, regarding lesion location, dimension, and type, as well as decision to perform a biopsy. One of the hysteroscopists had executed all the exams before. Proportions of agreement (PA) and kappa statistics (κ) with 95% confidence intervals (95% CI) were used. In Group 2, there was a higher proportion of a normal diagnosis; previous execution of the exams did not significantly affect the analysis of the video-recordings. With clinical information, agreement and reliability in the overall analysis of hysteroscopic video-recordings may reach almost perfect results, and this was not significantly affected by the execution of the exams before the analysis. However, there is still uncertainty in the analysis of specific endometrial cavity abnormalities.
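
    The agreement measures named above are easy to make concrete. A sketch of the proportion of agreement and Cohen's kappa for one binary decision (the 2x2 counts below are invented for illustration; the study's own tables are not reproduced here):

```python
def agreement_2x2(a, b, c, d):
    """Proportion of agreement (PA) and Cohen's kappa for two raters
    making a binary call (e.g. biopsy yes/no).

    a = both say yes, b = rater 1 yes / rater 2 no,
    c = rater 1 no / rater 2 yes, d = both say no.
    """
    n = a + b + c + d
    pa = (a + d) / n                       # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)  # chance agreement on "yes"
    p_no = ((c + d) / n) * ((b + d) / n)   # chance agreement on "no"
    pe = p_yes + p_no
    kappa = (pa - pe) / (1 - pe)
    return pa, kappa

# 90 videos, two raters agreeing on 80 of them (counts are illustrative):
pa, kappa = agreement_2x2(40, 5, 5, 40)
```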

  8. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image; Videodosimetria: avaliacao da dose da radiacao X atraves da imagem videofluroscopica

    Energy Technology Data Exchange (ETDEWEB)

    Nova, Joao Luiz Leocadio da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Centro de Ciencias da Saude. Nucleo de Tecnologia Educacional para a Saude; Lopes, Ricardo Tadeu [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    1996-12-31

    A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures in an on-line video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from video imaging. 3 refs., 2 figs., 2 tabs.

  9. Digital video recordings for training, assessment, and revalidation of surgical skills.

    Science.gov (United States)

    Gambadauro, Pietro; Magos, Adam

    2010-10-01

    Surgical training is undergoing drastic changes, and new strategies should be adopted to keep quality standards. The authors review and advocate the use of surgical recordings as a useful complement to current training, assessment, and revalidation modalities. For trainees, such recordings would promote quality-based and competence-based surgical training and allow for self-evaluation. Video logbooks could be used to aid interaction between trainer and trainee, and facilitate formative assessment. Recordings of surgery could also be integrated into trainees' portfolios and regular assessments. Finally, such recordings could make surgeons' revalidation more sensible. The routine use of records of surgical procedures could become an integral component of the standard of care. This would have been an unattractive suggestion until recently, as analogue recording techniques were inconvenient, cumbersome, and time consuming. Today, however, with the advent of inexpensive digital technologies, such a concept is realistic and is likely to improve patient care.

  10. Using Grounded Theory to Analyze Qualitative Observational Data that is Obtained by Video Recording

    Directory of Open Access Journals (Sweden)

    Colin Griffiths

    2013-06-01

    Full Text Available This paper presents a method for the collection and analysis of qualitative data that is derived by observation and that may be used to generate a grounded theory. Video recordings were made of the verbal and non-verbal interactions of people with severe and complex disabilities and the staff who work with them. Three dyads composed of a student/teacher or carer and a person with a severe or profound intellectual disability were observed in a variety of different activities that took place in a school. Two of these recordings yielded 25 minutes of video, which was transcribed into narrative format. The nature of the qualitative micro data that was captured is described and the fit between such data and classic grounded theory is discussed. The strengths and weaknesses of the use of video as a tool to collect data that is amenable to analysis using grounded theory are considered. The paper concludes by suggesting that using classic grounded theory to analyze qualitative data that is collected using video offers a method that has the potential to uncover and explain patterns of non-verbal interactions that were not previously evident.

  11. Automatic flame tracking technique for atrium fire from video images

    Science.gov (United States)

    Li, Jin; Lu, Puyi; Fong, Naikong; Chow, Wanki; Wong, Lingtim; Xu, Dianguo

    2005-02-01

    Smoke control is one of the important aspects of atrium fire safety. For an efficient smoke control strategy, it is very important to identify the smoke and fire source in a very short period of time. However, traditional methods such as point-type detectors are not effective for smoke and fire detection in large spaces such as atria. Therefore, video smoke and fire detection systems are proposed. For the development of such a system, automatic extraction and tracking of flame are two important problems that need to be solved. Based on entropy theory, region growing and the Otsu method, a new automatic integrated algorithm, which is used to track flame in video images, is proposed in this paper. It can successfully identify flames in different environments, against different backgrounds and in different forms. The experimental results show that this integrated algorithm has strong robustness and wide adaptability. In addition, because of its low computational demand, this algorithm can also be used as part of a robust, real-time smoke and fire detection system.
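
    Of the three building blocks named above, the Otsu method is the most self-contained: it picks the gray-level threshold maximizing the between-class variance, separating bright flame pixels from the background. A sketch (the pixel data are synthetic; the entropy and region-growing stages are not shown):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the threshold t maximizing the between-class
    variance of the two classes {p <= t} and {p > t}."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    w_b, sum_b = 0, 0.0  # weight and intensity sum of the background class
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                # background mean
        m_f = (total_sum - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dark background around gray level 10, bright flame pixels around 200:
pixels = [8, 10, 12, 10, 9] * 20 + [198, 200, 202, 201] * 10
t = otsu_threshold(pixels)  # lands between the two clusters
```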

  12. Tracking cells in Life Cell Imaging videos using topological alignments

    Directory of Open Access Journals (Sweden)

    Ersoy Ilker

    2009-07-01

    Full Text Available Abstract Background: With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells – many algorithms tend to recognize one cell as several cells or vice versa. Results: We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Conclusion: Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). Availability: The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
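
    The frame-linking step is a maximum-weight assignment between segments of consecutive frames. For a handful of segments, exhaustive search finds the same optimum as the integer linear program used in the paper (the overlap scores below are invented, and the segment hierarchies are ignored for brevity):

```python
from itertools import permutations

def best_matching(weights):
    """Maximum-weight one-to-one matching between segments of two frames.

    weights[i][j] = overlap score between segment i in frame t and
    segment j in frame t+1 (square matrix assumed for simplicity).
    """
    n = len(weights)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(weights[i][perm[i]] for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return list(best), best_score

# Convex-hull overlap scores between 3 segments in consecutive frames:
w = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
assignment, score = best_matching(w)  # each cell keeps its own track
```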

  13. Linking the sounds of dolphins to their locations and behavior using video and multichannel acoustic recordings.

    Science.gov (United States)

    Thomas, Rebecca E; Fristrup, Kurt M; Tyack, Peter L

    2002-10-01

    It is difficult to attribute underwater animal sounds to the individuals producing them. This paper presents a system developed to solve this problem for dolphins by linking acoustic locations of the sounds of captive bottlenose dolphins with an overhead video image. A time-delay beamforming algorithm localized dolphin sounds obtained from an array of hydrophones dispersed around a lagoon. The localized positions of vocalizing dolphins were projected onto video images. The performance of the system was measured for artificial calibration signals as well as for dolphin sounds. The performance of the system for calibration signals was analyzed in terms of acoustic localization error, video projection error, and combined acoustic localization and video error. The 95% confidence bounds for these were 1.5, 2.1, and 2.1 m, respectively. Performance of the system was analyzed for three types of dolphin sounds: echolocation clicks, whistles, and burst-pulsed sounds. The mean errors for these were 0.8, 1.3, and 1.3 m, respectively. The 95% confidence bound for all vocalizations was 2.8 m, roughly the length of an adult bottlenose dolphin. This system represents a significant advance for studying the function of vocalizations of marine animals in relation to their context, as the sounds can be identified to the vocalizing dolphin and linked to its concurrent behavior.
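
    The localization step can be illustrated as a time-difference-of-arrival search: candidate positions are scored by how well their predicted inter-hydrophone delays match the measured ones. A toy 2-D grid search (the hydrophone layout, grid, and 1500 m/s sound speed are assumptions for the sketch; the actual system beamforms the recorded waveforms themselves):

```python
import math

C = 1500.0  # nominal speed of sound in water, m/s (an assumption here)

def locate(hydrophones, toas, grid, step):
    """Grid-search source localization from arrival times at each hydrophone.

    hydrophones : list of (x, y) positions
    toas        : measured time of arrival at each hydrophone
    grid        : ((xmin, xmax), (ymin, ymax)) search area
    """
    def residual(px, py):
        # Compare measured TDOAs (relative to hydrophone 0) with predicted ones.
        pred = [math.hypot(px - hx, py - hy) / C for hx, hy in hydrophones]
        return sum(((toas[i] - toas[0]) - (pred[i] - pred[0])) ** 2
                   for i in range(1, len(hydrophones)))
    (xmin, xmax), (ymin, ymax) = grid
    best = None
    x = xmin
    while x <= xmax:
        y = ymin
        while y <= ymax:
            r = residual(x, y)
            if best is None or r < best[0]:
                best = (r, x, y)
            y += step
        x += step
    return best[1], best[2]

# Synthetic check: a source at (6, 4) in a lagoon with 4 hydrophones.
hp = [(0, 0), (10, 0), (10, 10), (0, 10)]
src = (6, 4)
toas = [math.hypot(src[0] - hx, src[1] - hy) / C for hx, hy in hp]
est = locate(hp, toas, ((0, 10), (0, 10)), step=0.5)
```

    The estimated position would then be projected into the overhead video image, as described above.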

  14. Patients' statements and experiences concerning receiving mechanical ventilation: a prospective video-recorded study.

    Science.gov (United States)

    Karlsson, Veronika; Lindahl, Berit; Bergbom, Ingegerd

    2012-09-01

    Prospective studies using video-recordings of patients during mechanical ventilator treatment (MVT) while conscious have not previously been published. The aim was to describe patients' statements, communication and facial expressions during a video-recorded interview while undergoing MVT. Content analysis and hermeneutics inspired by the philosophy of Gadamer were used. The patients experienced almost constant difficulties in breathing and lost their voice. The most common communication techniques patients used were nodding or shaking the head. Their expressions were interpreted as stiffened facial expression, tense body position and feelings of sadness and sorrow. Nursing care for patients conscious during MVT is challenging as it creates new demands regarding the content of the care provided. In caring for patients undergoing MVT while conscious, establishing a caring relationship, making patients feel safe and helping them to communicate seem to be most important for alleviating discomfort and instilling hope. © 2011 Blackwell Publishing Ltd.

  15. The Video Mesh: A Data Structure for Image-based Three-dimensional Video Editing

    OpenAIRE

    Chen, Jiawen; Paris, Sylvain; Wang, Jue; Matusik, Wojciech; Cohen, Michael; Durand, Fredo

    2011-01-01

    This paper introduces the video mesh, a data structure for representing video as 2.5D “paper cutouts.” The video mesh allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. The video mesh sparsely encodes optical flow as well as depth, and handles occlusion using local layering and alpha mattes. Motion is described by a sparse set of points tracked over time. Each point also stores a depth value. The video mesh is a trian...

  16. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J M

    2017-01-01

    Background: Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective: To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals: Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods: Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  17. Cryptanalysis of a spatiotemporal chaotic image/video cryptosystem

    Energy Technology Data Exchange (ETDEWEB)

    Rhouma, Rhouma [6' com laboratory, Ecole Nationale d' Ingenieurs de Tunis (ENIT) (Tunisia)], E-mail: rhoouma@yahoo.fr; Belghith, Safya [6' com laboratory, Ecole Nationale d' Ingenieurs de Tunis (ENIT) (Tunisia)

    2008-09-01

    This Letter proposes two different attacks on a recently proposed chaotic cryptosystem for images and videos in [S. Lian, Chaos Solitons Fractals (2007), (doi: 10.1016/j.chaos.2007.10.054)]. The cryptosystem under study displays a weakness in the generation of the keystream. The encryption is made by generating a keystream mixed with blocks generated from the plaintext and the ciphertext in a CBC-mode design. The keystream thus obtained remains unchanged for every encryption procedure. Guessing the keystream leads to guessing the key. Two possible attacks are then able to break the whole cryptosystem based on this drawback in generating the keystream. We also propose changing the description of the cryptosystem to make it robust against the described attacks, by moving to a PCBC-mode design.
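
    The flaw is the classic one of keystream reuse: with a single known plaintext/ciphertext pair, XOR recovers the keystream, which then decrypts every other message. A toy XOR stream cipher stands in for the chaotic cryptosystem here; the key bytes and messages are invented for the sketch:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy cipher with a FIXED keystream -- the weakness described above.
keystream = bytes(range(7, 23))  # 16 arbitrary "key" bytes

def encrypt(plaintext):
    return xor_bytes(plaintext, keystream)

# The attacker knows one plaintext/ciphertext pair ...
known_p = b"known image blk!"
recovered = xor_bytes(encrypt(known_p), known_p)

# ... and can now decrypt any other ciphertext without the key.
secret_c = encrypt(b"secret video blk")
assert xor_bytes(secret_c, recovered) == b"secret video blk"
```

    Making the keystream depend on each message, as in the PCBC redesign the authors propose, defeats this attack because nothing recovered from one encryption applies to the next.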

  18. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    Science.gov (United States)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is essential that the system ensure that, once recorded, the video cannot be altered, so that the audit trail is intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of various existing methods in the literature; much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  19. A reduced-reference perceptual image and video quality metric based on edge preservation

    Science.gov (United States)

    Martini, Maria G.; Villarini, Barbara; Fiorucci, Federico

    2012-12-01

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence--prior to compression and transmission--is not usually available at the receiver side, so it is important to rely at the receiver on an objective video quality metric that requires no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to edge and contour information of an image underpins the proposal of our reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric.
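
    The core of an edge-preservation metric can be sketched in a few lines: compute a binary edge map of the reference and of the distorted image, and score the fraction of reference edges that survive. Everything below (the gradient operator, threshold, and toy images) is an illustrative simplification, not the paper's exact metric:

```python
def edge_map(img, thresh):
    """Binary edge map from simple forward-difference gradients."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                edges.add((x, y))
    return edges

def edge_preservation(reference, distorted, thresh=50):
    """Reduced-reference score in [0, 1]: fraction of reference edge
    pixels still present in the distorted image."""
    ref, dis = edge_map(reference, thresh), edge_map(distorted, thresh)
    if not ref:
        return 1.0
    return len(ref & dis) / len(ref)

# A sharp vertical edge, and a heavily smoothed version of the same scene:
sharp = [[0, 0, 200, 200]] * 4
smeared = [[0, 40, 80, 120]] * 4
```

    In a true RR setting only a compact description of the reference edge map, not the whole reference image, would be transmitted to the receiver.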

  20. Digital Path Approach Despeckle Filter for Ultrasound Imaging and Video

    Directory of Open Access Journals (Sweden)

    Marek Szczepański

    2017-01-01

    Full Text Available We propose a novel filtering technique capable of reducing the multiplicative noise in ultrasound images that is an extension of the denoising algorithms based on the concept of digital paths. In this approach, the filter weights are calculated taking into account the similarity between pixel intensities that belong to the local neighborhood of the processed pixel, which is called a path. The output of the filter is estimated as the weighted average of pixels connected by the paths. The way of creating paths is pivotal and determines the effectiveness and computational complexity of the proposed filtering design. Such a procedure can be effective for different types of noise but fails in the presence of multiplicative noise. To increase the filtering efficiency for this type of disturbance, we introduce some improvements of the basic concept and new classes of similarity functions, and finally extend our techniques to the spatiotemporal domain. The experimental results prove that the proposed algorithm provides results comparable with the state-of-the-art techniques for multiplicative noise removal in ultrasound images, and it can be applied for real-time image enhancement of video streams.

  1. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
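
    The outlier-rejection idea can be shown with plain RANSAC on a toy problem: repeatedly fit a model to a minimal random sample and keep the fit explaining the most points. The sketch below fits a 2-D line rather than epipolar geometry, and uses ordinary (not preemptive) RANSAC; the SURF extraction step is omitted and the point data are invented:

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=1):
    """Split 2-D points into inliers/outliers of the best line y = a*x + b
    found by RANSAC."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue  # vertical line; skipped in this sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    outliers = [p for p in points if p not in best_inliers]
    return best_inliers, outliers

# Eight correct "matches" on the line y = 2x plus two gross mismatches:
pts = [(x, 2 * x) for x in range(8)] + [(1, 9), (5, -3)]
inliers, outliers = ransac_line(pts)
```

    In the paper's setting the minimal sample parameterizes the epipolar geometry instead of a line, and only the surviving inliers enter the pose estimation.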

  2. Initial evaluation of prospective cardiac triggering using photoplethysmography signals recorded with a video camera compared to pulse oximetry and electrocardiography at 7T MRI.

    Science.gov (United States)

    Spicher, Nicolai; Kukuk, Markus; Maderwald, Stefan; Ladd, Mark E

    2016-11-24

    Accurate synchronization between magnetic resonance imaging data acquisition and a subject's cardiac activity ("triggering") is essential for reducing image artifacts but conventional, contact-based methods for this task are limited by several factors, including preparation time, patient inconvenience, and susceptibility to signal degradation. The purpose of this work is to evaluate the performance of a new contact-free triggering method developed with the aim to eventually replace conventional methods in non-cardiac imaging applications. In this study, the method's performance is evaluated in the context of 7 Tesla non-enhanced angiography of the lower extremities. Our main contribution is a basic algorithm capable of estimating in real-time the phase of the cardiac cycle from reflection photoplethysmography signals obtained from skin color variations of the forehead recorded with a video camera. Instead of finding the algorithm's parameters heuristically, they were optimized using videos of the forehead as well as electrocardiography and pulse oximetry signals that were recorded from eight healthy volunteers in and outside the scanner, with and without active radio frequency and gradient coils. Based on the video characteristics, synthetic signals were generated and the "best available" values of an objective function were determined using mathematical optimization. The performance of the proposed method with optimized algorithm parameters was evaluated by applying it to the recorded videos and comparing the computed triggers to those of contact-based methods. Additionally, the method was evaluated by using its triggers for acquiring images from a healthy volunteer and comparing the result to images obtained using pulse oximetry triggering. 
During evaluation of the videos recorded inside the bore with active radio frequency and gradient coils, the pulse oximeter triggers were labeled in 62.5% of cases as "potentially usable" for cardiac triggering, the electrocardiography

  3. Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations

    Directory of Open Access Journals (Sweden)

    Nancy X.R. Wang

    2016-04-01

    Full Text Available Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brains of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain, revealing features consistent with known functional areas and opening the door to automated functional brain mapping in natural settings.

  4. Composing with Images: A Study of High School Video Producers.

    Science.gov (United States)

    Reilly, Brian

    At Bell High School (Los Angeles, California), students have been using video cameras, computers and editing machines to create videos in a variety of forms and on a variety of topics; in this setting, video is the textual medium of expression. A study was conducted using participant-observation and interviewing over the course of one school year…

  5. Energy efficient image/video data transmission on commercial multi-core processors.

    Science.gov (United States)

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-11-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2 to 5 without compromising image/video quality.

  6. Digital video image processing from dental operating microscope in endodontic treatment.

    Science.gov (United States)

    Suehara, Masataka; Nakagawa, Kan-Ichi; Aida, Natsuko; Ushikubo, Toshihiro; Morinaga, Kazuki

    2012-01-01

    Recently, optical microscopes have been used in endodontic treatment, as they offer advantages in terms of magnification, illumination, and documentation. Documentation is particularly important in presenting images to patients, and can take the form of both still images and motion video. Although high-quality still images can be obtained using a 35-mm film or CCD camera, the quality of still images produced by a video camera is significantly lower. The purpose of this study was to determine the potential of RegiStax in obtaining high-quality still images from a continuous video stream from an optical microscope. Video was captured continuously, and the sections with the highest luminosity were chosen for frame alignment and stacking using the RegiStax program. The resulting stacked images were subjected to wavelet transformation. The results indicate that high-quality images with a large depth of field can be obtained using this method.
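
    The selection-and-stacking idea can be sketched directly: rank frames by mean luminosity, keep the best few, and average them pixelwise to suppress noise. The alignment and wavelet-sharpening steps are omitted, and the frames are synthetic:

```python
def stack_best_frames(frames, k):
    """Average the k frames with the highest mean luminance.

    frames : list of 2-D lists of gray levels, assumed already aligned.
    """
    def mean_luma(f):
        return sum(sum(row) for row in f) / (len(f) * len(f[0]))
    best = sorted(frames, key=mean_luma, reverse=True)[:k]
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in best) / k for x in range(w)]
            for y in range(h)]

# Three frames of the same 2x2 scene; the under-exposed one is rejected:
frames = [
    [[100, 102], [98, 100]],
    [[10, 12], [8, 10]],
    [[104, 102], [102, 104]],
]
stacked = stack_best_frames(frames, k=2)
```

    Averaging n frames reduces uncorrelated pixel noise by roughly a factor of the square root of n, which is what makes the stacked still sharper than any single video frame.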

  7. "How To" Videos Improve Residents Performance of Essential Perioperative Electronic Medical Records and Clinical Tasks.

    Science.gov (United States)

    Zoghbi, Veronica; Caskey, Robert C; Dumon, Kristoffel R; Soegaard Ballester, Jacqueline M; Brooks, Ari D; Morris, Jon B; Dempsey, Daniel T

    2017-08-08

    The ability to use electronic medical records (EMR) is an essential skill for surgical residents. However, frustration and anxiety surrounding EMR tasks may detract from clinical performance. We created a series of brief (1-3 minute) "how to" videos demonstrating 7 key perioperative EMR tasks: booking OR cases, placing preprocedure orders, ordering negative-pressure wound dressing supplies, updating day-of-surgery history and physical notes, writing brief operative notes, discharging patients from the postanesthesia care unit, and checking vital signs. Additionally, we used "Cutting Insights"-a locally developed responsive mobile application for surgical trainee education-as a platform for providing interns with easy access to these videos. We hypothesized that exposure to these videos would lead to increased resident efficiency and confidence in performing essential perioperative tasks, ultimately leading to improved clinical performance. Eleven surgery interns participated in this initiative. Before watching the "how to" videos, each intern was timed performing the aforementioned 7 key perioperative EMR tasks. They also underwent a simulated perioperative emergency requiring the performance of 3 of these EMR tasks in conjunction with 5 other required interventions (including notifying the chief resident, the anesthesia team, and the OR coordinator; and ordering fluid boluses, appropriate laboratories, and blood products). These simulations were scored on a scale from 0 to 8. The interns were then directed to watch the videos. Two days later, their times for performing the 7 tasks and their scores for a similar perioperative emergency simulation were once again recorded. Before and after watching the videos, participants were surveyed to assess their confidence in performing each EMR task using a 5-point Likert scale. We also elicited their opinions of the videos and web-based mobile application using a 5-point scale. Statistical analyses to assess for

  8. Observing the Testing Effect using Coursera Video-recorded Lectures: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Paul Zhihao eYONG

    2016-01-01

    Full Text Available We investigated the testing effect in Coursera video-based learning. One hundred and twenty-three participants either (a) studied an instructional video-recorded lecture four times, (b) studied the lecture three times and took one recall test, or (c) studied the lecture once and took three tests. They then took a final recall test, either immediately or a week later, through which their learning was assessed. Whereas repeated studying produced better recall performance than did repeated testing when the final test was administered immediately, testing produced better performance when the final test was delayed until a week later. The testing effect was observed using Coursera lectures. Future directions are documented.

  9. Clinicians can accurately assign Apgar scores to video recordings of simulated neonatal resuscitations.

    Science.gov (United States)

    Nadler, Izhak; Liley, Helen G; Sanderson, Penelope M

    2010-08-01

    The Apgar score is used to describe the clinical condition of newborns. However, clinicians show low reliability when assigning Apgar scores to video recordings of actual neonatal resuscitations. Simulators provide a controlled environment for recreating and recording resuscitations. Clinicians assigned Apgar scores to such recordings to test the representativeness of simulator and recordings. Study design was guided by Brunswik's probabilistic functionalism. Judgment analysis methods were used to design 51 recordings of neonatal resuscitation scenarios, simulated with SimNewB (Laerdal, Stavanger, Norway). A step-by-step explanation of the design, preparation, and testing of the recordings is provided. Recorded Apgar scores, calculated from the presentation of clinical signs, were compared against the designed scores. Working independently and without feedback, three experts assigned Apgar scores to confirm that the recordings could be interpreted as intended. Seventeen neonatal resuscitation clinicians scored the recordings in a separate experiment. Correlations between Apgar scores assigned by the 20 viewers (experts plus clinicians) and recorded Apgar scores were high (0.78-0.91) and significant (P < 0.01). Fourteen of the 20 viewers scored the recordings without significant bias. Correlations between viewers' scores and scores of individualized linear models calculated for each viewer were high (0.79-0.97) and significant (P < 0.01), indicating systematic judgments. SimNewB provided a realistic presentation of clinical conditions that was preserved in the recordings. Clinicians could interpret clinical conditions systematically and accurately without feedback or detailed instructions. These methods are applicable to future research about accuracy of clinical assessments in actual and simulated environments.

  10. SIGMATA: Storage Integrity Guaranteeing Mechanism against Tampering Attempts for Video Event Data Recorders

    Directory of Open Access Journals (Sweden)

    Hyuckmin Kwon

    2016-04-01

    Full Text Available The usage and market size of video event data recorders (VEDRs), also known as car black boxes, are rapidly increasing. Since VEDRs can provide more visual information about car accident situations than any other device that is currently used for accident investigations (e.g., closed-circuit television), the integrity of the VEDR contents is important to any meaningful investigation. Researchers have focused on file system integrity or photographic approaches to integrity verification. However, unlike other general data, the video data in VEDRs exhibit a unique I/O behavior in that the videos are stored chronologically. In addition, the owners of VEDRs can manipulate unfavorable scenes after accidents to conceal their recorded behavior. Since prior art does not consider the time relationship between the frames and fails to discover frame-wise forgery, a more detailed integrity assurance is required. In this paper, we focus on the development of a frame-wise forgery detection mechanism that resolves the limitations of previous mechanisms. We introduce SIGMATA, a novel storage integrity guaranteeing mechanism against tampering attempts for VEDRs. We describe its operation, demonstrate its effectiveness for detecting possible frame-wise forgery, and compare it with existing mechanisms. The results show that the existing mechanisms fail to detect any frame-wise forgery, while our mechanism thoroughly detects every frame-wise forgery. We also evaluate its computational overhead using real VEDR videos. The results show that SIGMATA indeed discovers frame-wise forgery attacks effectively and efficiently, with an encoding overhead of less than 1.5 milliseconds per frame.
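
    Frame-wise integrity of chronologically stored video can be illustrated with a MAC chain: each frame's tag also covers the previous tag, so altering or dropping any frame invalidates every later tag. This is a generic sketch of the idea, not SIGMATA's actual construction, and the key and frame bytes are invented:

```python
import hashlib
import hmac

KEY = b"device-secret-key"  # illustrative; a real VEDR must protect this

def chain_tags(frames):
    """Per-frame HMAC chain: tag_i = HMAC(key, tag_{i-1} || frame_i)."""
    tags, prev = [], b"\x00" * 32
    for frame in frames:
        prev = hmac.new(KEY, prev + frame, hashlib.sha256).digest()
        tags.append(prev)
    return tags

def verify(frames, tags):
    return chain_tags(frames) == tags

frames = [b"frame-0", b"frame-1", b"frame-2"]
tags = chain_tags(frames)
assert verify(frames, tags)
# Tampering with any single frame is detected:
assert not verify([b"frame-0", b"EDITED!", b"frame-2"], tags)
```

    Because each tag depends on all earlier frames, the chain captures exactly the time relationship between frames that file-system-level integrity checks miss.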

  11. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 256 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation, to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that these non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
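    The non-physical modes described above are classical aliasing artifacts of undersampling: a vibration faster than half the frame rate reappears at a folded baseband frequency. The apparent frequency can be predicted with a generic folding model (this is standard sampling theory, not necessarily the authors' exact formula):

```python
def aliased_frequency(f_true, frame_rate):
    """Apparent frequency of a vibration of f_true Hz when sampled at
    frame_rate fps (folding into the baseband [0, frame_rate / 2])."""
    f = f_true % frame_rate          # wrap by whole multiples of the frame rate
    return min(f, frame_rate - f)    # reflect into the Nyquist band
```

    For example, under this model a 130 Hz vibration filmed at 100 fps would appear at 30 Hz; spectral peaks predicted this way can be flagged as non-physical and excluded.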

  12. An infrared high rate video imager for various space applications

    Science.gov (United States)

    Svedhem, Håkan; Koschny, Detlef

    2010-05-01

    Modern spacecraft with high data transmission capabilities have opened up the possibility to fly video rate imagers in space. Several fields concerned with observations of transient phenomena can benefit significantly from imaging at video frame rate. Some applications are observations and characterization of bolides/meteors, sprites, lightning, volcanic eruptions, and impacts on airless bodies. Applications can be found both on low and high Earth orbiting spacecraft as well as on planetary and lunar orbiters. The optimum wavelength range varies depending on the application, but we will focus here on the near infrared, partly since it allows exploration of a new field and partly because it, in many cases, allows operation both during day and night. Such an instrument has, to our knowledge, never flown in space so far. The only sensors of a similar kind fly on US defense satellites for monitoring launches of ballistic missiles; the data from these sensors, however, are largely inaccessible to scientists. We have developed a bread-board version of such an instrument, the SPOSH-IR. The instrument is based on an earlier technology development - SPOSH, a Smart Panoramic Optical Sensor Head for operation in the visible range - but with the sensor replaced by a cooled IR detector and new optics. The instrument uses a Sofradir 320x256 pixel HgCdTe detector array with 30 µm pixel size, mounted directly on top of a four-stage thermoelectric Peltier cooler. The detector-cooler combination is integrated into an evacuated closed package with a glass window on its front side. The detector has a sensitive range between 0.8 and 2.5 µm. The optical part is a seven-lens design with a focal length of 6 mm and a FOV of 90 deg by 72 deg, optimized for use at SWIR. The detector operates at 200 K while the optics operates at ambient temperature. The optics and electronics for the bread-board have been designed and built by Jena-Optronik, Jena, Germany.
This talk will present the design and the

  13. The effects of frame-rate and image quality on perceived video quality in videoconferencing

    OpenAIRE

    Thakur, Aruna; Gao, Chaunsi; Larsson, Andreas; Parnes, Peter

    2001-01-01

    This report discusses the effects of frame-rate and image quality on perceived video quality in a specific videoconferencing application (MarratechPro). Subjects with varying videoconferencing experience took part in four experiments in which they gave their opinions on video quality as the frame-rate and image quality were varied. The results of the experiments showed that, under the condition of limited bandwidth, the subjects preferred high frame rate over high image quality. ...

  14. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method....... Quality assessment of ABR videos is a hard problem, but our initial results are promising. We obtain a Spearman rank order correlation of 0.88 using content-independent cross-validation....
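    The performance figure reported above, a Spearman rank order correlation, can be computed directly from paired predicted and subjective quality scores. A minimal sketch, assuming no tied values (real evaluations would use a ties-aware implementation such as `scipy.stats.spearmanr`):

```python
def spearman_rho(x, y):
    """Spearman rank-order correlation for paired samples without ties:
    Pearson correlation of the ranks, via the closed form
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def rank(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0] * len(values)
        for position, index in enumerate(order):
            ranks[index] = position + 1
        return ranks

    rx, ry = rank(x), rank(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))
```

    A perfectly monotone prediction gives rho = 1.0 regardless of scale, which is why rank correlation is a common figure of merit for quality predictors.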

  15. Use of selfie sticks and iPhones to record operative photos and videos in plastic surgery.

    Science.gov (United States)

    Chandrappa, Ashok Basur; Nagaraj, Pradeep Kumar; Vasudevan, Srikanth; Nagaraj, Anantheswar Yelampalli; Jagadish, Krithika; Shah, Ankit

    2017-01-01

    Use of smartphones has become ubiquitous. With smartphone cameras becoming more powerful, they are replacing digital cameras and digital SLRs as primary instruments for taking photos and recording videos. It is natural, then, for plastic surgeons to find smartphones handy for taking still photographs and even recording high-definition or 4K videos. Another invention that has become popular with smartphone photography is the selfie stick. We explain the possibility and methodology of using an iPhone and a selfie stick to take operative photographs and high-quality videos.

  16. Use of selfie sticks and iPhones to record operative photos and videos in plastic surgery

    Directory of Open Access Journals (Sweden)

    Ashok Basur Chandrappa

    2017-01-01

    Full Text Available Use of smartphones has become ubiquitous. With smartphone cameras becoming more powerful, they are replacing digital cameras and digital SLRs as primary instruments for taking photos and recording videos. It is natural, then, for plastic surgeons to find smartphones handy for taking still photographs and even recording high-definition or 4K videos. Another invention that has become popular with smartphone photography is the selfie stick. We explain the possibility and methodology of using an iPhone and a selfie stick to take operative photographs and high-quality videos.

  17. Classifying Normal and Abnormal Status Based on Video Recordings of Epileptic Patients

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-01-01

    Full Text Available Based on video recordings of the movement of patients with epilepsy, this paper proposes a human action recognition scheme to detect distinct motion patterns and to distinguish the normal status from the abnormal status of epileptic patients. The scheme first extracts local features and holistic features, which are complementary to each other. Afterwards, a support vector machine is applied for classification. Based on the experimental results, the scheme achieves a satisfactory classification result and provides a fundamental analysis towards human-robot interaction with socially assistive robots in caring for patients with epilepsy (or other patients with brain disorders) in order to protect them from injury.

  18. Video Game Preservation in the UK: A Survey of Records Management Practices

    Directory of Open Access Journals (Sweden)

    Alasdair Bachell

    2014-10-01

    Full Text Available Video games are a cultural phenomenon; a medium like no other that has become one of the largest entertainment sectors in the world. While the UK boasts an enviable games development heritage, it risks losing a major part of its cultural output through an inability to preserve the games that are created by the country’s independent games developers. The issues go deeper than bit rot and other problems that affect all digital media; loss of context, copyright and legal issues, and the throwaway culture of the ‘next’ game all hinder the ability of fans and academics to preserve video games and make them accessible in the future. This study looked at the current attitudes towards preservation in the UK’s independent (‘indie’) video games industry by examining current record-keeping practices and analysing the views of games developers. The results show that there is an interest in preserving games, and possibly a desire to do so, but issues of piracy and cost prevent the industry from undertaking preservation work internally, and from allowing others to assume such responsibility. The recommendation made by this paper is not simply for preservation professionals and enthusiasts to collaborate with the industry, but to do so by advocating the commercial benefits that preservation may offer to the industry.

  19. Observations of asexual reproductive strategies in Antarctic hexactinellid sponges from ROV video records

    Science.gov (United States)

    Teixidó, Núria; Gili, Josep-Maria; Uriz, María-J.; Gutt, Julian; Arntz, Wolf E.

    2006-04-01

    Hexactinellid sponges are one of the structuring taxa of benthic communities on the Weddell Sea shelf (Antarctica). However, little is known about their reproduction patterns (larval development, release, settlement, and recruitment), particularly in relation to sexual and asexual processes in sponge populations. Video stations obtained during several expeditions, covering a wide depth range and different areas, recorded a high frequency of asexual reproductive strategies (ARS) (bipartition and budding) among hexactinellids. Analysis of seabed video strips between 108 and 256 m depth, representing an area of 1400 m², showed that about 28% of these sponges exhibited ARS. The Rossella nuda type dominated most of the video stations and exhibited the highest proportion of budding (35%), a proportion that increased with size class. Across all stations, the >20 cm size class exhibited mean values of 8.3±0.7 (SE) primary and 2.5±0.2 (SE) secondary propagules per sponge. Results from a shallow station (Stn 059, 117 m depth) showed the highest relative abundance of the R. nuda type and of budding (>20 cm ˜72%, 10-20 cm ˜60%, 5-10 cm ˜12%, and <5 cm ˜3%). A potential influence of iceberg scouring disturbance on the occurrence of budding and the number of propagules was also investigated. We conclude that asexual reproduction in hexactinellid sponges may be more frequent than previously thought and may greatly influence the genetic structure of populations.

  20. High-Performance Motion Estimation for Image Sensors with Video Compression

    OpenAIRE

    Weizhi Xu; Shouyi Yin; Leibo Liu; Zhiyong Liu; Shaojun Wei

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed...

  1. What do we do with all this video? Better understanding public engagement for image and video annotation

    Science.gov (United States)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolution are being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  2. 17 CFR 232.304 - Graphic, image, audio and video material.

    Science.gov (United States)

    2010-04-01

    ... delivered to investors and others is deemed part of the electronic filing and subject to the civil liability..., image, audio or video material, they are not subject to the civil liability and anti-fraud provisions of...

  3. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    Science.gov (United States)

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  4. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents’ Perspectives

    Science.gov (United States)

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O’Connor, Alexander; Collins, Michael J.

    2015-01-01

    This study examined adolescents’ attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one’s attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players’ attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents’ social cognitive judgments. PMID:25729336

  5. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    National Research Council Canada - National Science Library

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial...

  6. Research and implementation of video image acquisition and processing based on Java and JMF

    Science.gov (United States)

    Qin, Jinlei; Li, Zheng; Niu, Yuguang

    2012-01-01

    The article puts forward a method for video image acquisition and processing, and a system based on the Java Media Framework (JMF) implemented with it. The method can be realized in both B/S and C/S modes, taking advantage of the Java language. Key issues such as locating the video data source, playing video, and video image acquisition and processing are explained in detail. Operation of the system shows that the method is fully compatible with common video capture devices. At the same time, the system offers advantages such as lower cost, greater functionality, easier development, and cross-platform portability. Finally, the application prospects of the method based on Java and JMF are pointed out.

  7. Moving object detection in top-view aerial videos improved by image stacking

    Science.gov (United States)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
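    Once consecutive frames have been registered, the per-pixel redundancy described above can be exploited with a simple fusion step. A minimal sketch of the fusion stage only (registration is assumed done; the choice between median and mean fusion is generic, not necessarily the authors'):

```python
import numpy as np

def stack_frames(frames, method="median"):
    """Fuse a list of registered, same-shape frames pixel-wise.

    Median fusion suppresses zero-mean noise and is robust to transient
    outliers (e.g., a small moving object crossing a pixel); mean fusion
    averages noise but is more sensitive to such outliers.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    if method == "median":
        return np.median(stack, axis=0)
    return stack.mean(axis=0)
```

    The robustness of median fusion to transient outliers is also why small moving objects blur away in a stacked background, and hence must be registered and handled separately, as the paper does.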

  8. An Experimental Video Disc for Map and Image Display,

    Science.gov (United States)

    1984-01-01

    ABSTRACT: A cooperative effort between four government ... recently resulted in ... video tapes, to movie film, to transparencies, to paper photographic prints, to paper maps, charts, and documents. Each of these media has its own space ... perspective terrain views, engineering drawings, harbor charts, ground photographs, slides, movies, video tapes, documents, and organizational logos.

  9. Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths

    OpenAIRE

    Preciado, Miguel A.; Carles, Guillem; Harvey, Andrew R.

    2017-01-01

    We report the first computational super-resolved, multi-camera integral imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR Lepton cameras was assembled, and computational super-resolution and integral-imaging reconstruction employed to generate video with light-field imaging capabilities, such as 3D imaging and recognition of partially obscured objects, while also providing a four-fold increase in effective pixel count. This approach to high-resolution imaging enab...

  10. Binocular video ophthalmoscope for simultaneous recording of sequences of the human retina to compare dynamic parameters

    Science.gov (United States)

    Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim

    2017-07-01

    A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. Video sequences were registered off-line to compensate for eye movements. From registered video sequences dynamic parameters like cardiac cycle induced reflection changes and eye movements can be calculated and compared between eyes.

  11. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique.

    Directory of Open Access Journals (Sweden)

    Michael B McCamy

    Full Text Available Human eyes move continuously, even during visual fixation. These "fixational eye movements" (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.
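    The abstract does not state which detection algorithm was applied to the gaze traces, but microsaccades are commonly detected with an adaptive velocity threshold in the spirit of Engbert and Kliegl (2003). A hedged sketch of that approach; the parameter values and the robust noise estimate below are illustrative:

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection: flag samples whose 2-D
    eye velocity exceeds lam times a median-based estimate of the
    velocity noise, then group consecutive flagged samples into events."""
    vx = np.gradient(x) * fs                 # horizontal velocity
    vy = np.gradient(y) * fs                 # vertical velocity
    # Median-based SD is robust to the (rare) saccadic samples themselves.
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    crit = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # Group consecutive supra-threshold samples into candidate events.
    events, start = [], None
    for i, flag in enumerate(crit):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(crit) - start >= min_samples:
        events.append((start, len(crit) - 1))
    return events
```

    Because the threshold scales with the trace's own velocity noise, the same detector adapts to recordings of differing precision, which matters when comparing systems as different as a search coil and a video tracker.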

  12. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  13. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  14. Investigating interactional competence using video recordings in ESL classrooms to enhance communication

    Science.gov (United States)

    Krishnasamy, Hariharan N.

    2016-08-01

    Interactional competence, or knowing and using the appropriate skills for interaction in various communication situations within a given speech community and culture is important in the field of business and professional communication [1], [2]. Similar to many developing countries in the world, Malaysia is a growing economy and undergraduates will have to acquire appropriate communication skills. In this study, two aspects of the interactional communicative competence were investigated, that is the linguistic and paralinguistic behaviors in small group communication as well as conflict management in small group communication. Two groups of student participants were given a problem-solving task based on a letter of complaint. The two groups of students were video recorded during class hours for 40 minutes. The videos and transcription of the group discussions were analyzed to examine the use of language and interaction in small groups. The analysis, findings and interpretations were verified with three lecturers in the field of communication. The results showed that students were able to accomplish the given task using verbal and nonverbal communication. However, participation was unevenly distributed, with two students talking for less than a minute. Negotiation was based more on alternative views and consensus was easily achieved. In conclusion, suggestions are given on ways to improve English language communication.

  15. Video-EEG recordings in full-term neonates of diabetic mothers: observational study.

    Science.gov (United States)

    Castro Conde, José Ramón; González González, Nieves Luisa; González Barrios, Desiré; González Campo, Candelaria; Suárez Hernández, Yaiza; Sosa Comino, Elena

    2013-11-01

    To determine whether full-term newborn infants of diabetic mothers (IDM) present immature/disorganised EEG patterns in the immediate neonatal period, and whether there was any relationship with maternal glycaemic control. Cohort study with an incidental sample performed in a tertiary hospital neonatal unit. 23 IDM and 22 healthy newborns born between 2010 and 2013. All underwent video-EEG recording lasting >90 min at 48-72 h of life. We analysed the percentage of indeterminate sleep, transient sharp waves per hour and mature-for-gestational age EEG patterns (discontinuity, maximum duration of interburst interval (IBI), asynchrony, asymmetry, δ brushes, encoches frontales and α/θ rolandic activity). The group of IDM was divided into two subgroups according to maternal HbA1c: (1) HbA1c≥6% and (2) HbA1c<6%. IDM presented a significantly higher percentage of indeterminate sleep (57% vs 25%). IDM with maternal HbA1c≥6% showed a greater percentage of δ brushes in the burst (14% vs 4%; p=0.007). Full-term IDM newborns showed video-EEG features of abnormal development of brain function. Maternal HbA1c levels <6% during pregnancy could minimise the risk of cerebral dysmaturity.

  16. Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager.

    Science.gov (United States)

    Marijan, Malisa; Demirkol, Ilker; Maricić, Danijel I.; Sharma, Gaurav; Ignjatović, Zeljko

    2010-10-01

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic for the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules, in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based upon a pixel level sigma-delta (Σ∆) image sensor design that allows investigation of the tradeoff between the bit depth of the captured images and spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under power constraint and prolong the lifetime of the WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget.

  17. Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system

    Directory of Open Access Journals (Sweden)

    Michael B. McCamy

    2013-02-01

    Full Text Available Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so-called "fixational eye movements", which include microsaccades, drift, and ocular microtremor (OMT. Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, >0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT’s small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system responded to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.

  18. Fractal measures of video-recorded trajectories can classify motor subtypes in Parkinson's Disease

    Science.gov (United States)

    Figueiredo, Thiago C.; Vivas, Jamile; Peña, Norberto; Miranda, José G. V.

    2016-11-01

    Parkinson's Disease is one of the most prevalent neurodegenerative diseases in the world and affects millions of individuals worldwide. The clinical criteria for classification of motor subtypes in Parkinson's Disease are subjective and may be misleading when symptoms are not clearly identifiable. A video recording protocol was used to measure hand tremor of 14 individuals with Parkinson's Disease and 7 healthy subjects. A method for motor subtype classification was proposed based on the spectral distribution of the movement and compared with the existing clinical criteria. The box-counting dimension and Hurst exponent calculated from the trajectories were used as the relevant measures for the statistical tests. The classification based on the power spectrum is shown to be well suited to separating patients with and without tremor from healthy subjects and could provide clinicians with a tool to aid in the diagnosis of patients in an early stage of the disease.
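    As a minimal illustration of one of the measures named above, a box-counting dimension for a 2-D trajectory can be estimated by counting occupied grid cells at several scales and fitting the slope of log N(s) against log s. The scale set and the normalisation into the unit square below are arbitrary choices for demonstration, not the paper's exact procedure:

```python
import math

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 2-D point set.

    Points are normalised into the unit square; for each scale s the
    square is divided into s x s boxes and occupied boxes are counted.
    The dimension is the least-squares slope of log N(s) vs log s.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    span = max(max(xs) - x0, max(ys) - y0) or 1.0
    norm = [((x - x0) / span, (y - y0) / span) for x, y in points]
    logs, logn = [], []
    for s in scales:
        boxes = {(min(int(x * s), s - 1), min(int(y * s), s - 1))
                 for x, y in norm}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    # least-squares slope of logn against logs
    mx = sum(logs) / len(logs)
    my = sum(logn) / len(logn)
    num = sum((a - mx) * (b - my) for a, b in zip(logs, logn))
    den = sum((a - mx) ** 2 for a in logs)
    return num / den

line = [(t, t) for t in range(1000)]
print(box_counting_dimension(line))  # 1.0 (a line is one-dimensional)
```

    On recorded hand-tremor trajectories, higher dimensions indicate more space-filling, irregular movement.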

  19. From computer images to video presentation: Enhancing technology transfer

    Science.gov (United States)

    Beam, Sherilee F.

    1994-01-01

    With NASA placing increased emphasis on transferring technology to outside industry, NASA researchers need to evaluate many aspects of their efforts in this regard. Often it may seem like too much self-promotion to many researchers. However, industry's use of video presentations in sales, advertising, public relations and training should be considered. Today, the most typical presentation at NASA is through the use of vu-graphs (overhead transparencies), which can be effective for text or static presentations. For full-blown color and sound presentations, however, the best method is videotape. In fact, it is frequently more convenient due to its portability and the availability of viewing equipment. This talk describes techniques for creating a video presentation through the use of a combined researcher and video professional team.

  20. [A new laser scan system for video ophthalmoscopy. Initial clinical experiences also in relation to digital image processing].

    Science.gov (United States)

    Fabian, E; Mertz, M; Hofmann, H; Wertheimer, R; Foos, C

    1990-06-01

    The clinical advantages of a scanning laser ophthalmoscope (SLO) and video imaging of fundus pictures are described. Image quality (contrast, depth of field) and imaging possibilities (confocal stop) are assessed. Imaging with different lasers (argon, He-Ne) and changes in imaging rendered possible by confocal alignment of the imaging optics are discussed. Hard copies from video images are still of inferior quality compared to fundus photographs. Methods of direct processing and retrieval of digitally stored SLO video fundus images are illustrated by examples. Modifications for a definitive laser scanning system - in regard to the field of view and the quality of hard copies - are proposed.

  1. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capture by non-professionals often suffers from unanticipated effects such as image distortion and image blurring, so many researchers study how to mitigate these drawbacks and enhance video quality. In this paper an algorithm is proposed to stabilize jittery videos, producing a stable output video free of the jitter caused by shaking a handheld camera during recording. First, salient points in each frame of the input video are identified and processed; the video is then optimized and stabilized, where optimization governs the quality of the stabilization. The method has shown good results in terms of stabilization and removed distortion from output videos recorded under different circumstances.
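    The smoothing stage of such a stabilizer can be sketched independently of the feature-matching step. Assuming per-frame inter-frame motions have already been estimated from matched salient points (that estimation step, typically done with a library such as OpenCV, is omitted here), the camera path is accumulated, low-pass filtered with a moving average, and the difference is applied back to each frame as a corrective shift:

```python
from itertools import accumulate

def smooth_trajectory(frame_shifts, radius=2):
    """Given per-frame inter-frame motions (1-D for simplicity), return
    the correction to apply to each frame so the camera path follows
    its moving average instead of the jittery raw path."""
    path = list(accumulate(frame_shifts))        # cumulative camera path
    corrections = []
    for i in range(len(path)):
        lo = max(0, i - radius)
        hi = min(len(path), i + radius + 1)
        smoothed = sum(path[lo:hi]) / (hi - lo)  # moving-average position
        corrections.append(smoothed - path[i])   # shift onto smooth path
    return corrections

jitter = [1, -1] * 5                  # alternating handheld shake
path = list(accumulate(jitter))
corrections = smooth_trajectory(jitter)
stabilized = [p + c for p, c in zip(path, corrections)]
```

    The stabilized path oscillates far less than the raw one; in a real stabilizer the same correction is applied per axis (and per rotation component) when warping each frame.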

  2. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    Science.gov (United States)

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

    Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.
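    The core claim — that stylization should apply distinct adjustments per semantic region rather than one global transform — can be caricatured in a toy example. Here grey values and a hand-made label mask stand in for the paper's FCN-predicted semantics and learned adjustments; the labels and curves are hypothetical:

```python
def stylize(pixels, mask, curves):
    """Apply a region-specific tone curve to each pixel.

    `pixels` are grey values, `mask` holds a semantic label per pixel,
    and `curves` maps each label to a tone-adjustment function.
    """
    return [curves[label](p) for p, label in zip(pixels, mask)]

# Hypothetical labels and curves: brighten "sky", leave "skin" untouched.
curves = {
    "sky": lambda v: min(255, int(v * 1.3)),
    "skin": lambda v: v,
}
print(stylize([100, 100], ["sky", "skin"], curves))  # [130, 100]
```

    A global transform would be the special case where every label maps to the same curve; per-region curves are what widen the reachable style space.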

  3. Facial attractiveness ratings from video-clips and static images tell the same story.

    Science.gov (United States)

    Rhodes, Gillian; Lie, Hanne C; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness.

  4. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480 25 fps thermal camera on a CYCLONE V FPGA, which belongs to Altera's lowest-power FPGA family, and consumes less than 40% of CYCLONE V 5CEFA7 FPGA resources on average.

  5. MARINER 10 IMAGING ARCHIVE EXPERIMENT DATA RECORD

    Data.gov (United States)

    National Aeronautics and Space Administration — This series of fifteen CDs was produced by JPL's Science Digital Data Preservation Task (SDDPT) by migrating the original Mariner Ten image EDRs from old,...

  6. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    Science.gov (United States)

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2017-12-19

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights in how to construct such a device using off-the-shelf components. © 2017. Published by The Company of Biologists Ltd.
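    A minimal sketch of the underlying idea (software only, not the authors' hardware): if a known pseudo-random reference signal is embedded in each recorded stream, the streams can be aligned afterwards by locating the lag that maximises the cross-correlation with the reference:

```python
import random

def best_offset(reference, recording):
    """Return the lag at which `reference` best matches `recording`,
    found by maximising the (un-normalised) cross-correlation."""
    n, m = len(reference), len(recording)
    best_lag, best_score = 0, float("-inf")
    for lag in range(m - n + 1):
        seg = recording[lag:lag + n]
        score = sum(a * b for a, b in zip(reference, seg))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Demo: embed a known random reference into a noisy stream at lag 37.
random.seed(1)
ref = [random.uniform(-1, 1) for _ in range(200)]
stream = [random.gauss(0, 0.3) for _ in range(500)]
for i, v in enumerate(ref):
    stream[37 + i] += v
print(best_offset(ref, stream))  # 37
```

    Because the reference is random, its autocorrelation has a single sharp peak, so each sensor stream can be aligned to the same reference independently and without keeping detailed timing records.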

  7. Thinking Images: Doing Philosophy in Film and Video

    Science.gov (United States)

    Parkes, Graham

    2009-01-01

    Over the past several decades film and video have been steadily infiltrating the philosophy curriculum at colleges and universities. Traditionally, teachers of philosophy have not made much use of "audiovisual aids" in the classroom beyond the chalk board or overhead projector, with only the more adventurous playing audiotapes, for example, or…

  8. Video surveillance of epilepsy patients using color image processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Vilic, Adnan

    2014-01-01

    This paper introduces a method for tracking patients under video surveillance based on a color marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other mov...

  9. Nearshore subtidal bathymetry from time-exposure video images

    NARCIS (Netherlands)

    Aarninkhof, S.G.J.; Ruessink, B.G.; Roelvink, J.A.

    2005-01-01

    Time-averaged (over many wave periods) nearshore video observations show the process of wave breaking as one or more white alongshore bands of high intensity. Across a known depth profile, similar bands of dissipation can be predicted with a model describing the time-averaged cross-shore evolution

  10. Video Surveillance of Epilepsy Patients using Color Image Processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Alving, Jørgen

    2007-01-01

    This report introduces a method for tracking patients under video surveillance based on a marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other moving...

  11. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  12. Spectral optical coherence tomography in video-rate and 3D imaging of contact lens wear.

    Science.gov (United States)

    Kaluzny, Bartlomiej J; Fojt, Wojciech; Szkulmowska, Anna; Bajraszewski, Tomasz; Wojtkowski, Maciej; Kowalczyk, Andrzej

    2007-12-01

    To present the applicability of spectral optical coherence tomography (SOCT) for video-rate and three-dimensional imaging of a contact lens on the eye surface. The SOCT prototype instrument constructed at Nicolaus Copernicus University (Torun, Poland) is based on Fourier domain detection, which enables high sensitivity (96 dB) and increases the speed of imaging 60 times compared with conventional optical coherence tomography techniques. Consequently, video-rate imaging and three-dimensional reconstructions can be achieved, preserving the high quality of the image. The instrument operates under clinical conditions in the Ophthalmology Department (Collegium Medicum Nicolaus Copernicus University, Bydgoszcz, Poland). A total of three eyes fitted with different contact lenses were examined with the aid of the instrument. Before SOCT measurements, slit lamp examinations were performed. Data, which are representative for each imaging mode, are presented. The instrument provided high-resolution (4 microm axial x 10 microm transverse) tomograms with an acquisition time of 40 micros per A-scan. Video-rate imaging allowed the simultaneous quantitative evaluation of the movement of the contact lens and assessment of the fitting relationship between the lens and the ocular surface. Three-dimensional scanning protocols further improved lens visualization and fit evaluation. SOCT allows video-rate and three-dimensional cross-sectional imaging of the eye fitted with a contact lens. The analysis of both imaging modes suggests the future applicability of this technology to the contact lens field.

  13. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    Science.gov (United States)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel, real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleedings and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, which places a heavy burden on physicians' disease-finding efforts. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range imaging of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail to work. This paper presents a novel concept for real-time CE video stabilization and display. Instead of directly working on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, will be presented. In addition, non-rigid panoramic image registration methods will be discussed.

  14. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing and retrieving video content. In this paper, a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. The music video is first segmented into shots using illumination-invariant chromaticity histograms in an independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, which is used to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.
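    A bare-bones version of the shot-boundary stage can be sketched with plain grey-level histograms. The paper uses illumination-invariant chromaticity histograms in an ICA feature space; that substitution, and the threshold value, are assumptions made for illustration only:

```python
def histogram(frame, bins=8):
    """Normalised grey-level histogram of a frame (flat list of 0-255 values)."""
    h = [0] * bins
    for v in frame:
        h[min(v * bins // 256, bins - 1)] += 1
    return [c / len(frame) for c in h]

def shot_boundaries(frames, threshold=0.5):
    """Mark a cut wherever the L1 distance between consecutive
    frame histograms exceeds `threshold`."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

frames = [[10] * 100] * 5 + [[240] * 100] * 5   # hard cut after frame 4
print(shot_boundaries(frames))  # [5]
```

    Once shots are delimited, a keyframe can be chosen per shot by any per-frame score; the paper's image-complexity metric plays that role.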

  15. Image denoising method based on FPGA in digital video transmission

    Science.gov (United States)

    Xiahou, Yaotao; Wang, Wanping; Huang, Tao

    2017-11-01

    During image acquisition and transmission, an image suffers varying degrees of interference depending on the acquisition equipment and methods, and this interference reduces image quality and affects subsequent processing. Image filtering and enhancement are therefore particularly important. Traditional image denoising algorithms smooth the image while removing noise, so the details of the image are lost. To improve image quality while preserving image detail, this paper proposes an improved filtering algorithm based on edge detection, a Gaussian filter and a median filter. This method not only reduces noise effectively but also preserves image details relatively well. An FPGA implementation scheme for the filter algorithm is also given.
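    The edge-detection-guided combination of filters described above can be sketched in one dimension. The window size, threshold, and the crude central-difference edge detector below are illustrative assumptions, not the paper's FPGA design:

```python
def hybrid_denoise(signal, radius=1, edge_threshold=30):
    """Edge-aware smoothing: median-filter near detected edges to keep
    them sharp, mean-filter elsewhere to suppress noise."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi]
        # crude edge detector: large central difference around sample i
        grad = abs(signal[min(i + 1, n - 1)] - signal[max(i - 1, 0)])
        if grad > edge_threshold:
            out.append(sorted(window)[len(window) // 2])  # median
        else:
            out.append(sum(window) / len(window))         # mean
    return out

print(hybrid_denoise([0, 0, 0, 100, 100, 100]))  # the step edge survives
```

    A pure mean filter would blur the step; switching to the median at detected edges is what preserves it, which is the rationale for the hybrid design.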

  16. On-Board Video Recording Unravels Bird Behavior and Mortality Produced by High-Speed Trains

    Directory of Open Access Journals (Sweden)

    Eladio L. García de la Morena

    2017-10-01

    Full Text Available Large high-speed railway (HSR) networks are planned for the near future to accommodate increased transport demand with low energy consumption. However, high-speed trains produce unknown avian mortality due to birds using the railway and being unable to avoid approaching trains. Safety and logistic difficulties have until now precluded mortality estimation on railways through carcass removal, but information technologies can overcome such problems. We present the results obtained with an experimental on-board system to record bird-train collisions, composed of a frontal recording camera, a GPS navigation system and a data storage unit. An observer standing in the cabin behind the driver controlled the system and filled out a form with data on collisions and bird observations in front of the train. Photographs of the train front taken before and after each journey were used to improve the record of killed birds. Trains running the 321.7 km line between Madrid and Albacete (Spain) at speeds up to 250–300 km/h were equipped with the system during 66 journeys over one year, totaling approximately 14,700 km of effective recording. The review of videos produced 1,090 bird observations, 29.4% of them corresponding to birds crossing the infrastructure under the catenary and thus facing collision risk. Recordings also showed that 37.7% of bird crossings were of animals resting on some element of the infrastructure moments before the train arrived, and that the flight initiation distance of birds (mean ± SD) was between 60 ± 33 m (passerines) and 136 ± 49 m (raptors). Mortality on the railway was estimated at 60.5 birds/km per year on a line section with 53 runs per day and 26.1 birds/km per year on a section with 25 runs per day. Our results are the first published estimation of bird mortality on an HSR and show the potential of information technologies to yield useful data for monitoring the impact of trains on birds via on-board recording systems. Moreover

  17. The Impact of Video Compression on Remote Cardiac Pulse Measurement Using Imaging Photoplethysmography

    Science.gov (United States)

    2017-05-30

    quality is human subjective perception assessed by a Mean Opinion Score (MOS). Alternatively, video quality may be assessed using one of numerous...cameras. Synchronization of the image capture from the array was achieved using a PCIe-6323 data acquisition card (National Instruments, Austin...large reductions of either video resolution or frame rate did not strongly impact iPPG pulse rate measurements [9]. A balanced approach may yield

  18. Computer Vision Tools for Finding Images and Video Sequences.

    Science.gov (United States)

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  19. Energy use of televisions and video cassette recorders in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Meier, Alan; Rosen, Karen

    1999-03-01

    In an effort to more accurately determine nationwide energy consumption, the U.S. Department of Energy has recently commissioned studies with the goal of improving its understanding of the energy use of appliances in the miscellaneous end-use category. This study presents an estimate of the residential energy consumption of two of the most common domestic appliances in the miscellaneous end-use category: color televisions (TVs) and video cassette recorders (VCRs). The authors used a bottom-up approach in estimating national TV and VCR energy consumption. First, they obtained estimates of stock and usage from national surveys, while TV and VCR power measurements and other data were recorded at repair and retail shops. Industry-supplied shipment and sales distributions were then used to minimize bias in the power measurement samples. To estimate national TV and VCR energy consumption values, ranges of power draw and mode usage were created to represent situations in homes with more than one unit. Average energy use values for homes with one unit, two units, etc. were calculated and summed to provide estimates of total national TV and VCR energy consumption.

  20. [Video recording and data collection in the operating room: the way to a 'just culture' in the OR].

    Science.gov (United States)

    Schijven, M P; Legemate, D A; Legemaate, J

    2017-01-01

    The Academic Medical Center, Amsterdam, has started a trial to evaluate the usefulness to team debriefings of performance reports generated by a medical data recorder (MDR) in the operating room (OR). Outcome performance reports in structured debriefings in a secure, non-punitive environment are likely to heighten the level of situational awareness of OR teams. This may prevent future error. In addition, the use of video and - even more likely - use of an MDR may contribute to establishing a 'just culture' in the OR. MDRs offer a wealth of data, but only if these data are processed well do the resulting outcome reports reveal insights useful for structured debriefings. The implementation of video recordings or MDRs must be preceded by carefully addressing privacy and litigation issues relating to both OR staff and patients. In this article, we address viewpoints and discuss implementation strategy and the legal considerations involved in enabling the use of video and data registration in the OR.

  1. Enrichment of words by visual images: books, slides, and videos.

    Science.gov (United States)

    Brozek, J M

    1999-08-01

    This article reviews additions to 3 ways of visually enriching verbal accounts of the history of psychology: illustrated books, slides, and videos. Although each approach has its limitations and its merits, taken together they constitute a significant addition to the printed word. As such, they broaden the toolkits of both the learners and the teachers of the history of psychology. Reference is also made to 3 earlier publications.

  2. Practitioner Action Research on Writing Center Tutor Training: Critical Discourse Analysis of Reflections on Video-Recorded Sessions

    Science.gov (United States)

    Pigliacelli, Mary

    2017-01-01

    Training writing center tutors to work collaboratively with students on their writing is a complex and challenging process. This practitioner action research uses critical discourse analysis (Gee, 2014a) to interrogate tutors' understandings of their work, as expressed in their written reflections on video-recorded tutoring sessions, to facilitate…

  3. 75 FR 63434 - Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording...

    Science.gov (United States)

    2010-10-15

    ... Food Safety and Inspection Service Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording Equipment in Federally Inspected Establishments AGENCY: Food Safety and... communicated via Listserv, a free electronic mail subscription service for industry, trade groups, consumer...

  4. Assessing the Content of YouTube Videos in Educating Patients Regarding Common Imaging Examinations.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Won, Eugene; Doshi, Ankur M

    2016-12-01

    To assess the content of currently available YouTube videos seeking to educate patients regarding commonly performed imaging examinations. After initial testing of possible search terms, the first two pages of YouTube search results for "CT scan," "MRI," "ultrasound patient," "PET scan," and "mammogram" were reviewed to identify educational patient videos created by health organizations. Sixty-three included videos were viewed and assessed for a range of features. Average views per video were highest for MRI (293,362) and mammography (151,664). Twenty-seven percent of videos used a nontraditional format (eg, animation, song, humor). All videos (100.0%) depicted a patient undergoing the examination, 84.1% a technologist, and 20.6% a radiologist; 69.8% mentioned examination lengths, 65.1% potential pain/discomfort, 41.3% potential radiation, 36.5% a radiology report/results, 27.0% the radiologist's role in interpretation, and 13.3% laboratory work. For CT, 68.8% mentioned intravenous contrast and 37.5% mentioned contrast safety. For MRI, 93.8% mentioned claustrophobia, 87.5% noise, 75.0% need to sit still, 68.8% metal safety, 50.0% intravenous contrast, and 0.0% contrast safety. For ultrasound, 85.7% mentioned use of gel. For PET, 92.3% mentioned radiotracer injection, 61.5% fasting, and 46.2% diabetic precautions. For mammography, unrobing, avoiding deodorant, and possible additional images were all mentioned by 63.6%; dense breasts were mentioned by 0.0%. Educational patient videos on YouTube regarding common imaging examinations received high public interest and may provide a valuable patient resource. Videos most consistently provided information detailing the examination experience and less consistently provided safety information or described the presence and role of the radiologist. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  5. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  6. Video image processing to create a speed sensor

    Science.gov (United States)

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In the report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  7. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Roč. 264, č. 1 (2016), s. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  8. Development of fast video recording of plasma interaction with a lithium limiter on T-11M tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Lazarev, V.B., E-mail: v_lazarev@triniti.ru [SSC RF TRINITI Troitsk, Moscow (Russian Federation); Dzhurik, A.S.; Shcherbak, A.N. [SSC RF TRINITI Troitsk, Moscow (Russian Federation); Belov, A.M. [NRC “Kurchatov Institute”, Moscow (Russian Federation)

    2016-11-15

    Highlights:
    • The paper presents the results of a study of tokamak plasma interaction with lithium capillary-porous system (CPS) limiters and PFC using a high-speed color camera.
    • Registration of emission near the target in the SOL in neutral lithium light, and e-folding length measurements for neutral lithium.
    • Registration of the effect of MHD instabilities on the CPS lithium limiter.
    • A sequence of frames shows the evolution of a lithium bubble on the surface of the lithium limiter.
    • View of the filament structure near the plasma edge in ohmic mode.
    Abstract: A new high-speed color camera with interference filters was installed for fast video recording of plasma-surface interaction with a lithium limiter based on a capillary-porous system (CPS) in the T-11M tokamak vessel. The paper presents the results of a study of tokamak plasma interaction (frame exposure time up to 4 μs) with the CPS lithium limiter in a stable stationary phase and in unstable regimes with internal disruption, together with results of processing the image of the light emission around the probe, i.e. the e-folding length for neutral lithium penetration and the e-folding length for the lithium ion flux in the SOL region.

  9. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained while they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, because fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

  10. Word2VisualVec: Image and Video to Sentence Matching by Visual Feature Prediction

    OpenAIRE

    Dong, Jianfeng; Li, Xirong; Snoek, Cees G. M.

    2016-01-01

    This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence...

  11. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    The "Atmosphere-Space Interactions Monitor" (ASIM) is a payload to be mounted on one of the external platforms of the Columbus module of the International Space Station (ISS). The instruments include six video cameras, six photometers and one X-ray detector. The main scientific objective of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied, including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed.

  12. Application of video recording technology to improve husbandry and reproduction in the carmine bee-eater (Merops n. nubicus).

    Science.gov (United States)

    Ferrie, Gina M; Sky, Christy; Schutz, Paul J; Quinones, Glorieli; Breeding, Shawnlei; Plasse, Chelle; Leighty, Katherine A; Bettinger, Tammie L

    2016-01-01

    Incorporating technology with research is becoming increasingly important to enhance animal welfare in zoological settings. Video technology is used in the management of avian populations to facilitate efficient collection of information on aspects of avian reproduction that are impractical or impossible to obtain through direct observation. Disney's Animal Kingdom® maintains a successful breeding colony of Northern carmine bee-eaters. This African species is a cavity nester, making its nesting behavior difficult to study and manage in an ex situ setting. After initial research focused on developing a suitable nesting environment, our goal was to continue developing methods to improve reproductive success and increase the likelihood of chicks fledging. We installed infrared bullet cameras in five nest boxes and connected them to a digital video recording system, with data recorded continuously through the breeding season. We then scored and summarized nesting behaviors. Using remote video methods of observation provided much insight into the behavior of the birds in the colony's nest boxes. We observed aggression between birds during the egg-laying period, and therefore immediately removed all of the eggs for artificial incubation, which completely eliminated egg breakage. We also used observations of adult feeding behavior to refine the chick hand-rearing diet and practices. Although many video recording configurations have been summarized and evaluated in various reviews, we found success with the digital video recorder and infrared cameras described here. Applying emerging technologies to cavity-nesting avian species is a necessary addition to improving the management and sustainability of zoo avian populations. © 2015 Wiley Periodicals, Inc.

  13. The Moving Image in Education Research: Reassembling the Body in Classroom Video Data

    Science.gov (United States)

    de Freitas, Elizabeth

    2016-01-01

    While audio recordings and observation might have dominated past decades of classroom research, video data is now the dominant form of data in the field. Ubiquitous videography is standard practice today in archiving the body of both the teacher and the student, and vast amounts of classroom and experiment clips are stored in online archives. Yet…

  14. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    Science.gov (United States)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large-scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down, more perpendicular to the flow. The third camera is in the next reach upstream of the sediment trap, in closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge after one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be made. We decided to use IMGRAFT (an open-source image georectification toolbox), which can correct distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because of the already orthorectified images. During the monitoring program (since 2011) we recorded three debris flow events at the sediment trap area (each with very different surge dynamics). The camera in the gully was in operation in 2014 and managed to record granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. 
The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image
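
At its core, LSPIV estimates surface velocity by cross-correlating an interrogation window between consecutive orthorectified frames and converting the best-matching shift into a displacement per frame interval. A minimal 1D sketch of that matching step (illustrative only, not Fudaa-LSPIV code; the function name and data are ours):

```python
def patch_displacement(frame_a, frame_b, max_shift):
    """Estimate the 1D displacement of an intensity pattern between two
    frames by maximising the cross-correlation over candidate shifts -
    the core operation LSPIV applies to 2D interrogation windows."""
    best_shift, best_score = 0, float("-inf")
    n = len(frame_a)
    for shift in range(-max_shift, max_shift + 1):
        score = sum(frame_a[i] * frame_b[i + shift]
                    for i in range(n) if 0 <= i + shift < n)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# A bright tracer at index 3 in frame A moves to index 5 in frame B
a = [0, 0, 0, 9, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 9, 0, 0]
print(patch_displacement(a, b, max_shift=3))  # -> 2
```

Dividing the shift by the frame interval, and scaling by the ground sampling distance of the orthorectified grid, would turn this displacement into a surface velocity.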

  15. Video and image retrieval beyond the cognitive level: the needs and possibilities

    Science.gov (United States)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics and, thus, on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content, such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or the 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  16. Fusion of intraoperative cone-beam CT and endoscopic video for image-guided procedures

    Science.gov (United States)

    Daly, M. J.; Chan, H.; Prisman, E.; Vescan, A.; Nithiananthan, S.; Qiu, J.; Weersink, R.; Irish, J. C.; Siewerdsen, J. H.

    2010-02-01

    Methods for accurate registration and fusion of intraoperative cone-beam CT (CBCT) with endoscopic video have been developed and integrated into a system for surgical guidance that accounts for intraoperative anatomical deformation and tissue excision. The system is based on a prototype mobile C-Arm for intraoperative CBCT that provides low-dose 3D image updates on demand with sub-mm spatial resolution and soft-tissue visibility, and also incorporates subsystems for real-time tracking and navigation, video endoscopy, deformable image registration of preoperative images and surgical plans, and 3D visualization software. The position and pose of the endoscope are geometrically registered to 3D CBCT images by way of real-time optical tracking (NDI Polaris) for rigid endoscopes (e.g., head and neck surgery), and electromagnetic tracking (NDI Aurora) for flexible endoscopes (e.g., bronchoscopes, colonoscopes). The intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) parameters of the endoscopic camera are calibrated from images of a planar calibration checkerboard (2.5×2.5 mm² squares) obtained at different perspectives. Video-CBCT registration enables a variety of 3D visualization options (e.g., oblique CBCT slices at the endoscope tip, augmentation of video with CBCT images and planning data, virtual reality representations of CBCT [surface renderings]), which can reveal anatomical structures not directly visible in the endoscopic view - e.g., critical structures obscured by blood or behind the visible anatomical surface. Video-CBCT fusion is evaluated in pre-clinical sinus and skull base surgical experiments, and is currently being incorporated into an ongoing prospective clinical trial in CBCT-guided head and neck surgery.
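
The intrinsic parameters estimated from the checkerboard (focal length, principal point, distortion) enter a pinhole projection model that maps 3D camera-frame points to pixels. A minimal sketch of that model with a single radial distortion coefficient (the numeric values are toy assumptions, not this system's calibration):

```python
def project(point_cam, fx, fy, cx, cy, k1):
    """Project a 3D point in camera coordinates to pixel coordinates with
    a pinhole model and one radial distortion coefficient - the intrinsic
    parameters a checkerboard calibration estimates."""
    x = point_cam[0] / point_cam[2]   # normalized image coordinates
    y = point_cam[1] / point_cam[2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                 # radial distortion factor
    return fx * x * d + cx, fy * y * d + cy

# A point 100 mm in front of the camera, 10 mm off-axis, with toy intrinsics
u, v = project((10.0, 0.0, 100.0), fx=800, fy=800, cx=320, cy=240, k1=0.0)
print(u, v)  # -> 400.0 240.0
```

Calibration inverts this relation: given many checkerboard corners with known board geometry, it solves for fx, fy, cx, cy and the distortion terms that best reproduce the observed pixel positions.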

  17. The ImageNet Shuffle: Reorganized Pre-training for Video Event Detection

    NARCIS (Netherlands)

    Mettes, P.; Koelma, D.C.; Snoek, C.G.M.

    2016-01-01

    This paper strives for video event detection using a representation learned from deep convolutional neural networks. Different from the leading approaches, who all learn from the 1,000 classes defined in the ImageNet Large Scale Visual Recognition Challenge, we investigate how to leverage the

  18. Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization.

    Science.gov (United States)

    Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J

    2015-07-15

    A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue.

  19. Geometric Distortion in Image and Video Watermarking. Robustness and Perceptual Quality Impact

    NARCIS (Netherlands)

    Setyawan, I.

    2004-01-01

    The main focus of this thesis is the problem of geometric distortion in image and video watermarking. In this thesis we discuss the two aspects of the geometric distortion problem, namely the watermark desynchronization aspect and the perceptual quality assessment aspect. Furthermore, this thesis

  20. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods, and it has received a lot of attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches.
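
The best-known Class Energy Image, the Gait Energy Image, is simply the pixel-wise mean of aligned binary silhouettes over one gait cycle. A minimal sketch of that computation (illustrative only; the toy frames are ours):

```python
def gait_energy_image(silhouettes):
    """Compute a Gait Energy Image: the pixel-wise mean of aligned
    binary silhouettes over a gait cycle. Each silhouette is a 2D
    list of 0/1 values of identical shape."""
    n = len(silhouettes)
    rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(s[r][c] for s in silhouettes) / n for c in range(cols)]
            for r in range(rows)]

# Two toy 2x3 silhouettes: pixels present in both frames get energy 1.0,
# pixels present in only one frame get 0.5
frames = [
    [[1, 1, 0],
     [0, 1, 0]],
    [[1, 0, 0],
     [0, 1, 1]],
]
gei = gait_energy_image(frames)
print(gei)  # -> [[1.0, 0.5, 0.0], [0.0, 1.0, 0.5]]
```

High-energy pixels mark the stable body shape, while intermediate values capture the moving limbs; the resulting single image is what a recognizer compares across subjects.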

  1. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Science.gov (United States)

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods, and it has received a lot of attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches. PMID:25574935

  2. System and method for image registration of multiple video streams

    Energy Technology Data Exchange (ETDEWEB)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  3. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  4. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    Science.gov (United States)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on the screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  5. Operational prediction of rip currents using numerical model and nearshore bathymetry from video images

    Science.gov (United States)

    Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.

    2017-07-01

    Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be utilized to provide forecasts of nearshore waves and currents that may endanger beachgoers. In this paper, an operational model for rip current prediction utilizing nearshore bathymetry obtained from a video imaging technique is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of the wave groups) can be simulated simultaneously. Up-to-date bathymetry will be obtained using the video imaging technique cBathy. The system will be tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper will test the applicability of bathymetry obtained from the video technique as input for the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground truth observations. This bathymetry validation is followed by an example of an operational forecasting type of simulation for predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.

  6. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information available at the transmitter is treated as side information at the receiver. Complex processes in video encoding, such as estimation of the motion vector, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation of a low-density parity-check (LDPC) coding method in the AWGN channel.
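
The Wyner-Ziv idea of sending only decimated coded data can be illustrated with syndrome coding: the transmitter sends a short syndrome of each block, and the receiver combines it with its own side information (e.g. the previous frame) to recover the source. A toy sketch using a (7,4) Hamming syndrome in place of the paper's LDPC code (names and data are ours):

```python
# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# representation of i+1, so the syndrome of a weight-1 error names its position.
H = [[(i + 1) >> b & 1 for i in range(7)] for b in range(3)]

def syndrome(bits):
    """3-bit syndrome of a 7-bit block under H (mod-2 arithmetic)."""
    return [sum(h * x for h, x in zip(row, bits)) % 2 for row in H]

def wyner_ziv_decode(side_info, tx_syndrome):
    """Recover the source block from receiver-side information that differs
    from it in at most one bit, using only the 3 syndrome bits the
    transmitter sent instead of the full 7-bit block."""
    diff = [a ^ b for a, b in zip(syndrome(side_info), tx_syndrome)]
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]  # 0 means "no difference"
    decoded = list(side_info)
    if pos:
        decoded[pos - 1] ^= 1
    return decoded

source = [1, 0, 1, 1, 0, 0, 1]   # block at the capsule (transmitter)
side   = [1, 0, 1, 0, 0, 0, 1]   # receiver's side information (1 bit off)
print(wyner_ziv_decode(side, syndrome(source)) == source)  # -> True
```

The encoder never sees the side information; it merely sends fewer bits (3 instead of 7 here), which is the asymmetry that moves the heavy decoding work to the battery-rich receiver.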

  7. Using smart phone video to supplement communication of radiology imaging in a neurosurgical unit: technical note.

    Science.gov (United States)

    Shivapathasundram, Ganeshwaran; Heckelmann, Michael; Sheridan, Mark

    2012-04-01

    The use of smart phones within medicine continues to grow at the same rate as mobile phone technology continues to evolve. One use of smart phones within medicine is in the transmission of radiological images to consultant neurosurgeons who are off-site in an emergency setting. In our unit, this has allowed quick, efficient, and safe communication between consultant neurosurgeon and trainees, aiding in rapid patient assessment and management in emergency situations. To describe a new means of smart phone technology use in the neurosurgical setting, where the video application of smart phones allows transfer of a whole series of patient neuroimaging via multimedia messaging service to off-site consultant neurosurgeons. METHOD/TECHNIQUE: Using the video application of smart phones, a 30-second video of an entire series of patient neuroimaging was transmitted to consultant neurosurgeons. With this information, combined with a clinical history, accurate management decisions were made. This technique has been used on a number of emergency situations in our unit to date. Thus far, the imaging received by consultants has been a very useful adjunct to the clinical information provided by the on-site trainee, and has helped expedite management of patients. While the aim should always be for the specialist neurosurgeon to review the imaging in person, in emergency settings, this is not always possible, and we feel that this technique of smart phone video is a very useful means for rapid communication with neurosurgeons.

  8. Determination of exterior parameters for video image sequences from helicopter by block adjustment with combined vertical and oblique images

    Science.gov (United States)

    Zhang, Jianqing; Zhang, Yong; Zhang, Zuxun

    2003-09-01

    Determination of image exterior parameters is a key aspect of the realization of automatic texture mapping of buildings in the reconstruction of real 3D city models. This paper reports on an application of automatic aerial triangulation to a block with three video image sequences: one vertical image sequence over buildings' roofs and two oblique image sequences over buildings' walls. A new processing procedure is developed in order to automatically match homologous points between oblique and vertical images. Two strategies are tested. One treats the three strips as independent blocks and executes strip block adjustment for each; the other creates a block with the three strips, uses the new image matching procedure to extract a large number of tie points, and executes block adjustment. The block adjustment results of these two strategies are also compared.

  9. Video-recorded simulated patient interactions: can they help develop clinical and communication skills in today's learning environment?

    Science.gov (United States)

    Seif, Gretchen A; Brown, Debora

    2013-01-01

    It is difficult to provide real-world learning experiences for students to master clinical and communication skills. The purpose of this paper is to describe a novel instructional method using self- and peer-assessment, reflection, and technology to help students develop effective interpersonal and clinical skills. The teaching method is grounded in constructivist learning theory and incorporates the use of educational technology. The learning activities were incorporated into the pre-clinical didactic curriculum. The students participated in two video-recording assignments, performed self-assessments on each, and received a peer-assessment on the second video recording. The learning activity was evaluated through the self- and peer-assessments and an instructor-designed survey. This evaluation identified several themes related to the assignment, student performance, clinical behaviors and establishing rapport. Overall, the students perceived that the learning activities assisted in the development of clinical and communication skills prior to direct patient care. The use of video recordings of a simulated history and examination is a unique learning activity for pre-clinical physical therapy students in the development of clinical and communication skills.

  10. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique to process standard RGB video data of exposed human skin and extract the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches are able to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
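
The core of a VPPG pipeline is a spectral peak search on the mean green-channel trace of a skin region of interest: the dominant frequency in the physiological band gives the heart rate. A minimal sketch with a naive DFT on synthetic data (illustrative only, not the algorithms evaluated in this record):

```python
import math

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (bpm) from per-frame mean green-channel values
    by finding the dominant frequency in the 0.7-4 Hz physiological band."""
    n = len(green_means)
    mean = sum(green_means) / n
    signal = [g - mean for g in green_means]  # remove the DC component
    best_freq, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        freq = k * fps / n
        if not 0.7 <= freq <= 4.0:            # skip implausible rates
            continue
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = freq, power
    return best_freq * 60.0  # Hz -> beats per minute

# Synthetic 10-second clip at 30 fps with a 72 bpm (1.2 Hz) pulse component
fps = 30.0
frames = [100.0 + 0.5 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(300)]
print(round(estimate_heart_rate(frames, fps)))  # -> 72
```

Real pipelines add the steps this record flags as critical: face/ROI tracking (e.g. KLT), motion rejection, and windowing the trace before the spectral estimate.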

  11. Intraoperative stereoscopic 3D video imaging: pushing the boundaries of surgical visualisation and applications for neurosurgical education.

    Science.gov (United States)

    Heath, Michael D; Cohen-Gadol, Aaron A

    2012-10-01

    In the past decades, we have witnessed waves of interest in three-dimensional (3D) stereoscopic imaging. Previously, the complexity associated with 3D technology led to its absence in the operating room. But recently, the public's resurrection of interest in this imaging modality has revived its exploration in surgery. Technological advances have also paved the way for incorporation of 3D stereoscopic imaging in neurosurgical education. Herein, the authors discuss the advantages of intraoperative 3D recording and display for neurosurgical learning and contemplate its future directions based on their experience with 3D technology and a review of the literature. Potential benefits of stereoscopic displays include an enhancement of subjective image quality, proper identification of the structure of interest from surrounding tissues and improved surface detection and depth judgment. Such benefits are critical during the intraoperative decision-making process and proper handling of the lesion (specifically, for surgery on aneurysms and tumours), and should therefore be available to the observers in the operating room and residents in training. Our trainees can relive the intraoperative experience of the primary surgeon by reviewing the recorded stereoscopic 3D videos. Proper 3D knowledge of surgical anatomy is important for operative success. 3D stereoscopic viewing of this anatomy may accelerate the learning curve of trainees and improve the standards of surgical teaching. More objective studies are relevant in further establishing the value of 3D technology in neurosurgical education.

  12. Image manipulation: Fraudulence in digital dental records: Study and review.

    Science.gov (United States)

    Chowdhry, Aman; Sircar, Keya; Popli, Deepika Bablani; Tandon, Ankita

    2014-01-01

    In present-day times, freely available software allows dentists to tweak their digital records as never before. But there is a fine line between acceptable enhancements and scientific delinquency. To manipulate digital images (used in forensic dentistry) of casts, lip prints, and bite marks in order to highlight tampering techniques and methods of detecting and preventing manipulation of digital images. Digital image records of forensic data (casts, lip prints, and bite marks photographed using a Samsung Techwin L77 digital camera) were manipulated using freely available software. Fake digital images can be created either by merging two or more digital images, or by altering an existing image. Retouched digital images can be used for fraudulent purposes in forensic investigations. However, tools are available to detect such digital frauds, which are extremely difficult to assess visually. Thus, all digital content should mandatorily have attached metadata, and preferably watermarking, in order to avert malicious re-use. Also, computer alertness, especially about imaging software, should be promoted among forensic odontologists/dental professionals.

  13. Video outside versus video inside the web: do media setting and image size have an impact on the emotion-evoking potential of video?

    NARCIS (Netherlands)

    Verleur, R.; Verhagen, Pleunes Willem; Crawford, Margaret; Simonson, Michael; Lamboy, Carmen

    2001-01-01

    To explore the educational potential of video-evoked affective responses in a Web-based environment, the question was raised whether video in a Web-based environment is experienced differently from video in a traditional context. An experiment was conducted that studied the affect-evoking power of

  14. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC)-based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides large embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
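
The lowest-bit-plane embedding this record describes reduces, in its simplest form, to replacing coefficient LSBs with secret bits. A minimal integer sketch (illustrative only; it omits the JPEG2000 Tier-1/Tier-2 coding and the OPAP adjustment the paper adds):

```python
def embed_bits(coefficients, bits):
    """Replace the least significant bit of each coefficient with one
    secret bit - the basic lowest-bit-plane embedding step."""
    assert len(bits) <= len(coefficients)
    stego = list(coefficients)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_bits(coefficients, count):
    """Recover the embedded bits by reading back the LSB plane."""
    return [c & 1 for c in coefficients[:count]]

cover = [52, 61, 48, 55, 60]       # toy non-negative wavelet coefficients
secret = [1, 0, 1]
stego = embed_bits(cover, secret)
print(stego)                       # -> [53, 60, 49, 55, 60]
print(extract_bits(stego, 3))      # -> [1, 0, 1]
```

Each embedded bit perturbs a coefficient by at most 1, which is why embedding in the lowest bit planes keeps the stego image visually close to the cover.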

  15. Use of KLV to combine metadata, camera sync, and data acquisition into a single video record

    Science.gov (United States)

    Hightower, Paul

    2015-05-01

    SMPTE has designed significant data spaces into each frame that may be used to store time stamps and other time-sensitive data. There are metadata spaces both in the analog equivalent of the horizontal blanking, referred to as the Horizontal Ancillary (HANC) space, and in the analog equivalent of the vertical blanking interval lines, referred to as the Vertical Ancillary (VANC) space. The HANC space is very crowded, with many data types including information about frame rate and format, 16 channels of audio sound bites, copyright controls, billing information and more than 2,000 other elements. The VANC space is relatively unused by cinema and broadcasters, which makes it a prime target for use in test, surveillance and other specialized applications. Taking advantage of the SMPTE structures, one can design and implement custom data-gathering and recording systems while maintaining full interoperability with standard equipment. The VANC data space can be used to capture image-relevant data and to overcome transport latency and diminished image quality introduced by the use of compression.
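A KLV triplet of the kind carried in ancillary data spaces (per SMPTE ST 336) consists of a 16-byte Universal Label key, a BER-encoded length, and the value bytes. A minimal sketch of the encoding, using a placeholder all-zero key rather than a registered label:

```python
def klv_encode(key, value):
    """Build one KLV triplet: 16-byte Universal Label key,
    BER length (short form for values under 128 bytes,
    long form otherwise), then the raw value bytes."""
    assert len(key) == 16, "SMPTE Universal Labels are 16 bytes"
    n = len(value)
    if n < 128:
        length = bytes([n])                        # BER short form
    else:
        size = (n.bit_length() + 7) // 8           # BER long form:
        length = bytes([0x80 | size]) + n.to_bytes(size, "big")
    return key + length + value

# e.g. a per-frame timestamp payload under a placeholder key
packet = klv_encode(bytes(16), b"2015-05-01T12:00:00.000Z")
```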

  16. Ultrafast video imaging of cell division from zebrafish egg using multimodal microscopic system

    Science.gov (United States)

    Lee, Sung-Ho; Jang, Bumjoon; Kim, Dong Hee; Park, Chang Hyun; Bae, Gyuri; Park, Seung Woo; Park, Seung-Han

    2017-07-01

    Unlike conventional laser scanning microscopies, nonlinear optical laser scanning microscopy (SHG and THG microscopy) applies ultrafast laser technology, which delivers high peak powers from relatively inexpensive, low-average-power sources. Its short-pulse nature reduces ionization damage in organic molecules and enables bright, label-free imaging. In this study, we measured cell division of a zebrafish egg with ultrafast video imaging using a multimodal nonlinear optical microscope. The result shows in-vivo, label-free imaging of cell division with sub-cellular resolution.

  17. The challenge associated with the robust computation of meteor velocities from video and photographic records

    Science.gov (United States)

    Egal, A.; Gural, P. S.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2017-09-01

    The CABERNET project was designed to push the limits for obtaining accurate measurements of meteoroid orbits from photographic and video meteor camera recordings. The discrepancy between the measured and theoretical orbits of these objects heavily depends on the semi-major axis determination, and thus on the reliability of the pre-atmospheric velocity computation. With a spatial resolution of 0.01° per pixel and a temporal resolution of up to 10 ms, CABERNET should be able to provide accurate measurements of velocities and trajectories of meteors. To achieve this, it is necessary to improve the precision of the data reduction processes, and especially the determination of the meteor's velocity. In this work, most of the steps of the velocity computation are thoroughly investigated in order to reduce the uncertainties and error contributions at each stage of the reduction process. The accuracy of the measurement of meteor centroids is established and results in a precision of 0.09 pixels for CABERNET, which corresponds to 3.24 arcsec. Several methods to compute the velocity were investigated, based on the trajectory determination algorithms described in Ceplecha (1987) and Borovicka (1990), as well as the multi-parameter fitting (MPF) method proposed by Gural (2012). In the case of the MPF, many optimization methods were implemented in order to find the most efficient and robust technique to solve the minimization problem. The entire data reduction process is assessed using simulated meteors with different geometrical configurations and deceleration behaviors. It is shown that the multi-parameter fitting method proposed by Gural (2012) is the most accurate method to compute the pre-atmospheric velocity in all circumstances. Techniques that assume a constant velocity at the beginning of the path, as derived from the trajectory determination of Ceplecha (1987) or Borovicka (1990), can lead to large errors for decelerating meteors. The MPF technique also allows one to
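The deceleration pitfall noted above can be illustrated with a toy fit (this is not the CABERNET reduction pipeline, and all numbers are invented): for a decelerating meteor, a straight-line fit to the along-track positions underestimates the initial speed, while a model with a deceleration term recovers it.

```python
import numpy as np

# Synthetic along-track positions of a decelerating meteor,
# sampled every 10 ms (all values illustrative).
v0, decel = 60.0, 250.0                 # km/s and km/s^2
t = np.arange(10) * 0.01                # 10 frames over 0.09 s
s = v0 * t - 0.5 * decel * t ** 2       # position along the trail, km

# Constant-velocity assumption: slope of a straight-line fit.
v_const = np.polyfit(t, s, 1)[0]

# Allowing deceleration: quadratic fit; the linear coefficient
# is the speed at t = 0, i.e. the pre-atmospheric speed.
v_decel = np.polyfit(t, s, 2)[1]
```

Here `v_decel` recovers the true 60 km/s, while `v_const` is biased low by several km/s, which in turn biases the semi-major axis of the derived orbit.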

  18. Simulation of the perpendicular recording process including image charge effects

    NARCIS (Netherlands)

    Beusekamp, M.F.; Fluitman, J.H.J.

    1986-01-01

    This paper presents a complete model for the perpendicular recording process in single-pole-head keeper-layer configurations. It includes the influence of the image-charge distributions in the head and the keeper layer. Based on calculations of magnetization distributions in standstill situations,

  19. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Displays (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss of uniformity in the resulting image into consideration. Subjective evaluations of images generated using different backlight dimming algorithms and clipping strategies show that the proposed metric estimates the perceived image quality more accurately than conventional PSNR.
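As a rough sketch of the core idea, PSNR can be computed after converting both images to CIE L*a*b* (here via a standard sRGB-to-XYZ-to-Lab conversion under D65); the paper's metric additionally weights luminance reduction and uniformity loss, which this simplified version omits:

```python
import numpy as np

def srgb_to_lab(img):
    """Convert an sRGB image with values in [0, 1] to CIE L*a*b* (D65)."""
    # linearize sRGB
    lin = np.where(img <= 0.04045, img / 12.92,
                   ((img + 0.055) / 1.055) ** 2.4)
    # sRGB -> XYZ (D65 primaries)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])   # D65 white point
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def psnr_lab(ref, test, peak=100.0):
    """PSNR between two sRGB images, with the MSE taken over
    the L*, a*, b* channels instead of raw RGB values."""
    mse = np.mean((srgb_to_lab(ref) - srgb_to_lab(test)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Because L*a*b* is approximately perceptually uniform, errors in this space track visible differences (e.g. from backlight dimming) better than RGB-domain PSNR.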

  20. The effects of video compression on acceptability of images for monitoring life sciences experiments

    Science.gov (United States)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  1. Viral video: Live imaging of virus-host encounters

    Science.gov (United States)

    Son, Kwangmin; Guasto, Jeffrey S.; Cubillos-Ruiz, Andres; Chisholm, Sallie W.; Sullivan, Matthew B.; Stocker, Roman

    2014-11-01

    Viruses are non-motile infectious agents that rely on Brownian motion to encounter and subsequently adsorb to their hosts. Paradoxically, the viral adsorption rate is often reported to be larger than the theoretical limit imposed by the virus-host encounter rate, highlighting a major gap in the experimental quantification of virus-host interactions. Here we present the first direct quantification of the viral adsorption rate, obtained using live imaging of individual host cells and viruses for thousands of encounter events. The host-virus pair consisted of Prochlorococcus MED4, a small (800 nm) non-motile bacterium that dominates photosynthesis in the oceans, and its virus PHM-2, a myovirus with an 80 nm icosahedral capsid and a 200 nm long rigid tail. We simultaneously imaged hosts and viruses moving by Brownian motion using two-channel epifluorescence microscopy in a microfluidic device. This detailed quantification of viral transport yielded a 20-fold smaller adsorption efficiency than previously reported, indicating the need for a major revision of infection models for marine, and likely other, ecosystems.
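The theoretical encounter-rate limit referred to above is usually taken to be the Smoluchowski diffusion-limited kernel, k = 4π(D_host + D_virus)(r_host + r_virus), with diffusivities from the Stokes-Einstein relation. A back-of-the-envelope sketch with illustrative parameter values (not the study's measurements):

```python
import math

# Smoluchowski encounter kernel for two Brownian spheres.
kB, T = 1.380649e-23, 300.0        # Boltzmann constant (J/K), temperature (K)
mu = 1.0e-3                        # viscosity of water, Pa*s

def stokes_einstein_D(radius):
    """Diffusion coefficient of a sphere: D = kB*T / (6*pi*mu*r)."""
    return kB * T / (6 * math.pi * mu * radius)

r_host, r_virus = 400e-9, 50e-9    # illustrative radii, m
D = stokes_einstein_D(r_host) + stokes_einstein_D(r_virus)
k_enc = 4 * math.pi * D * (r_host + r_virus)   # encounter kernel, m^3/s
k_enc_mL = k_enc * 1e6                         # in mL/s, as usually quoted
```

Measured adsorption rates exceeding this kernel would be unphysical, which is the paradox the live-imaging measurement set out to resolve.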

  2. Automatic Polyp Detection in Pillcam Colon 2 Capsule Images and Videos: Preliminary Feasibility Report

    Directory of Open Access Journals (Sweden)

    Pedro N. Figueiredo

    2011-01-01

    Background. The aim of this work is to present an automatic colorectal polyp detection scheme for capsule endoscopy. Methods. PillCam COLON2 capsule-based images and videos were used in our study. The database consists of full exam videos from five patients. The algorithm is based on the assumption that the polyps show up as a protrusion in the captured images and is expressed by means of a P-value, defined by geometrical features. Results. Seventeen PillCam COLON2 capsule videos are included, containing frames with polyps, flat lesions, diverticula, bubbles, and trash liquids. Polyps larger than 1 cm express a P-value higher than 2000, and 80% of the polyps show a P-value higher than 500. Diverticula, bubbles, trash liquids, and flat lesions were correctly interpreted by the algorithm as nonprotruding images. Conclusions. These preliminary results suggest that the proposed geometry-based polyp detection scheme works well, not only by allowing the detection of polyps but also by differentiating them from nonprotruding images found in the films.

  3. Mission planning optimization of video satellite for ground multi-object staring imaging

    Science.gov (United States)

    Cui, Kaikai; Xiang, Junhua; Zhang, Yulin

    2018-03-01

    This study investigates the emergency scheduling problem of ground multi-object staring imaging for a single video satellite. In the proposed mission scenario, the ground objects require a specified duration of staring imaging by the video satellite. The planning horizon is not long, i.e., it is usually shorter than one orbit period. A binary decision variable and the imaging order are used as the design variables, and the total observation revenue combined with the influence of the total attitude maneuvering time is regarded as the optimization objective. Based on the constraints of the observation time windows, satellite attitude adjustment time, and satellite maneuverability, a constraint satisfaction mission planning model is established for ground object staring imaging by a single video satellite. Further, a modified ant colony optimization algorithm with tabu lists (Tabu-ACO) is designed to solve this problem. The proposed algorithm can fully exploit the intelligence and local search ability of ACO. Based on full consideration of the mission characteristics, the design of the tabu lists can reduce the search range of ACO and improve the algorithm efficiency significantly. The simulation results show that the proposed algorithm outperforms the conventional algorithm in terms of optimization performance, and it can obtain satisfactory scheduling results for the mission planning problem.
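In classical ant colony optimization, each ant's tabu list records the targets already scheduled, so they are excluded when the next target is drawn. A minimal sketch of that construction step, omitting the paper's time-window, attitude-maneuver and pheromone-update machinery (all names illustrative):

```python
import random

def ant_tour(pheromone, alpha=1.0, seed=0):
    """One ant builds an imaging order over n targets. The tabu
    list holds targets that are no longer admissible (here simply
    the ones already visited); the next target is drawn with
    probability proportional to pheromone ** alpha."""
    rng = random.Random(seed)
    n = len(pheromone)
    current, tabu = 0, {0}
    tour = [0]
    while len(tabu) < n:
        choices = [j for j in range(n) if j not in tabu]
        weights = [pheromone[current][j] ** alpha for j in choices]
        current = rng.choices(choices, weights=weights)[0]
        tabu.add(current)
        tour.append(current)
    return tour

pheromone = [[1.0] * 6 for _ in range(6)]   # uniform initial pheromone
order = ant_tour(pheromone)
```

In the paper's variant, the tabu list would additionally exclude targets whose observation windows or attitude-maneuver times make them infeasible, shrinking the search space and speeding up convergence.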

  4. A Study of Behavioral Change in 50 Severely Multi-Sensorily Handicapped Children Through Application of the Video-Tape Recorded Behavioral Evaluation Protocol. Final Report.

    Science.gov (United States)

    Curtis, W. Scott

    Examined with 49 deaf-blind children (under 9 years old) was the use of the Telediagnostic Behavior Evaluation Protocol, a video-tape recorded evaluation protocol. To further develop the Telediagnostic Protocol and delineate the characteristics of the Ss observed during the process of test development, Ss were video-taped in eight 3-minute…

  5. Relating pressure measurements to phenomena observed in high speed video recordings during tests of explosive charges in a semi-confined blast chamber

    CSIR Research Space (South Africa)

    Mostert, FJ

    2012-09-01

    High-speed video recordings of the fireball and the post-detonative behaviour of the explosive products were obtained from the open end of the chamber. The framing rate of the video camera was 10 000 fps, and the pressure measurements were obtained for at least 10 ms after...

  6. Monitoring the body temperature of cows and calves using video recordings from an infrared thermography camera.

    Science.gov (United States)

    Hoffmann, Gundula; Schmidt, Mariana; Ammon, Christian; Rose-Meierhöfer, Sandra; Burfeind, Onno; Heuwieser, Wolfgang; Berg, Werner

    2013-06-01

    The aim of this study was to assess the variability of temperatures measured by a video-based infrared camera (IRC) in comparison to rectal and vaginal temperatures. The body surface temperatures of cows and calves were measured without contact at different body regions using videos from the IRC. Altogether, 22 cows and 9 calves were examined. The differences among the IRC temperatures measured at the body regions, i.e. eye (mean: 37.0 °C), back of the ear (35.6 °C), shoulder (34.9 °C) and vulva (37.2 °C), were significant (P < 0.05). The use of infrared thermography videos has the advantage of allowing more than 1 picture per animal to be analyzed in a short period of time, and shows potential as a monitoring system for body temperatures in cattle.

  7. Video Object Tracking in Neural Axons with Fluorescence Microscopy Images

    Directory of Open Access Journals (Sweden)

    Liang Yuan

    2014-01-01

    tracking. In this paper, we describe two automated tracking methods for analyzing neurofilament movement based on two different techniques: constrained particle filtering and tracking-by-detection. First, we introduce the constrained particle filtering approach. In this approach, the orientation and position of a particle are constrained by the axon's shape such that fewer particles are necessary for tracking neurofilament movement than in object tracking techniques based on generic particle filtering. Second, a tracking-by-detection approach to neurofilament tracking is presented. In this approach, the axon is decomposed into blocks, and the blocks encompassing the moving neurofilaments are detected by graph labeling using a Markov random field. Finally, we compare the two tracking methods by performing tracking experiments on real time-lapse image sequences of neurofilament movement. The experimental results show that both methods perform well in comparison with existing approaches, with the tracking-by-detection approach being slightly more accurate of the two.

  8. Does video recording alter the behavior of police during interrogation? A mock crime-and-investigation study.

    Science.gov (United States)

    Kassin, Saul M; Kukucka, Jeff; Lawson, Victoria Z; DeCarlo, John

    2014-02-01

    A field study conducted in a midsized city police department examined whether video recording alters the process of interrogation. Sixty-one investigators inspected a staged crime scene and interrogated a male mock suspect in sessions that were surreptitiously recorded. By random assignment, half the suspects had committed the mock crime; the other half were innocent. Half the police participants were informed that the sessions were being recorded; half were not. Coding of the interrogations revealed the use of several common tactics designed to get suspects to confess. Importantly, police in the camera-informed condition were less likely than those in the uninformed condition to use minimization tactics and marginally less likely to use maximization tactics. They were also perceived by suspects, all of whom were uninformed of the camera manipulation, as trying less hard to elicit a confession. Unanticipated results indicated that camera-informed police were better able to discriminate between guilty and innocent suspects in their judgments and behavior. The results as a whole indicate that video recording can affect the process of interrogation, notably by inhibiting the use of certain tactics. It remains to be seen whether these findings generalize to longer and more consequential sessions, and whether the camera-induced differences found are to be judged favorable or unfavorable.

  9. Body movement analysis during sleep for children with ADHD using video image processing.

    Science.gov (United States)

    Nakatani, Masahiro; Okada, Shima; Shimizu, Sachiko; Mohri, Ikuko; Ohno, Yuko; Taniike, Masako; Makikawa, Masaaki

    2013-01-01

    In recent years, the number of children with sleep disorders that cause arousal during sleep or light sleep has been increasing. Attention-deficit hyperactivity disorder (ADHD) is one cause of such sleep disturbance; children with ADHD show frequent body movement during sleep. Therefore, we investigated the body movement during sleep of children with and without ADHD using video imaging. We analysed gross body movements (GM) and obtained the GM rate and the rest duration. There were differences between the body movements of children with ADHD and normally developed children. The children with ADHD moved frequently, so their rest duration was shorter than that of the normally developed children. Additionally, the rate of gross body movement showed a significant difference in REM sleep (P < 0.05). These differences could be detected using video image processing.

  10. 4K Video-Laryngoscopy and Video-Stroboscopy: Preliminary Findings.

    Science.gov (United States)

    Woo, Peak

    2016-01-01

    4K video is a new format. At 3840 × 2160 resolution, it has 4 times the resolution of standard 1080 high definition (HD) video. Magnification can be done without loss of resolution. This study uses 4K video for video-stroboscopy. Forty-six patients were examined by conventional video-stroboscopy (digital 3 chip CCD) and compared with 4K video-stroboscopy. The video was recorded on a Blackmagic 4K cinema production camera in CinemaDNG RAW format. The video was played back on a 4K monitor and compared to standard video. Pathological conditions included: polyps, scar, cysts, cancer, sulcus, and nodules. Successful 4K video recordings were achieved in all subjects using a 70° rigid endoscope. The camera system is bulky. The examination is performed similarly to standard video-stroboscopy. Playback requires a 4K monitor. As expected, the images were far clearer in detail than standard video. Stroboscopy video using the 4K camera was consistently able to show more detail. Two patients had diagnosis change after 4K viewing. 4K video is an exciting new technology that can be applied to laryngoscopy. It allows for cinematic 4K quality recordings. Both continuous and stroboscopic light can be used for visualization. Its clinical utility is feasible, but usefulness must be proven. © The Author(s) 2015.

  11. Video Tape Recording Evaluation Protocol Behavior Rating Form - Part 1: Communication.

    Science.gov (United States)

    Curtis, W. Scott; Donlon, Edward T.

    Presented is the behavior rating scale designed for use with a video tape protocol for examination of multiply handicapped deaf blind children, whose development and evaluation are discussed in EC 040 599. The behavioral rating scale consists of five sections: unstructured orientation of child in examining area, child's task orientation and…

  12. Perspectives on Using Video Recordings in Conversation Analytical Studies on Learning in Interaction

    Science.gov (United States)

    Rusk, Fredrik; Pörn, Michaela; Sahlström, Fritjof; Slotte-Lüttge, Anna

    2015-01-01

    Video is currently used in many studies to document the interaction in conversation analytical (CA) studies on learning. The discussion on the method used in these studies has primarily focused on the analysis or the data construction, whereas the relation between data construction and analysis is rarely brought to attention. The aim of this…

  13. The importance of video editing in automated image analysis in studies of the cerebral cortex.

    Science.gov (United States)

    Terry, R D; Deteresa, R

    1982-03-01

    Editing of the video image in computerized image analysis is readily accomplished with the appropriate apparatus, but it slows the assay very significantly. In dealing with the cerebral cortex, however, video editing is of considerable importance, in that cells are very often contiguous to one another or partially superimposed, which gives erroneous measurements unless the cells are artificially separated. Also important is the elimination of vascular cells from consideration by the automated counting apparatus. A third available mode of editing allows the filling-in of the cytoplasm of cell bodies that are not stained with sufficient intensity to be wholly detected. This study, which utilizes 23 samples, demonstrates that, in a given area of a histologic section of cerebral cortex, the number of small cells counted is greater and the number of large neurons smaller with editing than without. Because not all cases follow this general pattern, inadequate editing may lead to significant errors in individual specimens as well as in the calculated mean. Video editing is therefore an essential part of the morphometric study of the cerebral cortex by means of automated image analysis.

  14. Visual Recognition in RGB Images and Videos by Learning from RGB-D Data.

    Science.gov (United States)

    Li, Wen; Chen, Lin; Xu, Dong; Van Gool, Luc

    2017-08-02

    In this work, we propose a new framework for recognizing RGB images or videos by leveraging a set of labeled RGB-D data, in which the depth features can be additionally extracted from the depth images or videos. We formulate this task as a new unsupervised domain adaptation (UDA) problem, in which we aim to take advantage of the additional depth features in the source domain and also cope with the data distribution mismatch between the source and target domains. To handle the domain distribution mismatch, we propose to learn an optimal projection matrix to map the samples from both domains into a common subspace such that the domain distribution mismatch can be reduced. Moreover, we also propose different strategies to effectively utilize the additional depth features. To simultaneously cope with the above two issues, we formulate a unified learning framework called domain adaptation from multi-view to single-view (DAM2S). By defining various forms of regularizers in our DAM2S framework, different strategies can be readily incorporated to learn robust SVM classifiers for classifying the target samples. We conduct comprehensive experiments, which demonstrate the effectiveness of our proposed methods for recognizing RGB images and videos by learning from RGB-D data.

  15. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    Science.gov (United States)

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  16. Using image processing technology combined with decision tree algorithm in laryngeal video stroboscope automatic identification of common vocal fold diseases.

    Science.gov (United States)

    Jeffrey Kuo, Chung-Feng; Wang, Po-Chun; Chu, Yueng-Hsiang; Wang, Hsing-Won; Lai, Chun-Yu

    2013-10-01

    This study used the actual laryngeal video stroboscope videos taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the image of the largest glottal area from the video to obtain the physiological data of the vocal folds. In this study, an automatic vocal fold disease identification system was designed, which can obtain the physiological parameters for normal vocal folds, vocal paralysis and vocal nodules from image processing according to the pathological features. The decision tree algorithm was used as the classifier of the vocal fold diseases. The identification rate was 92.6%, and the identification rate with an image recognition improvement processing procedure after classification can be improved to 98.7%. Hence, the proposed system has value in clinical practices. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2000 (NODC Accession 0000728)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2000 at 23 sites, some of which had multiple depths. Estimates of substrate...

  18. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  19. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from year 1999 (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  1. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP):Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  5. Brain Source Imaging in Preclinical Rat Models of Focal Epilepsy using High-Resolution EEG Recordings.

    Science.gov (United States)

    Bae, Jihye; Deshmukh, Abhay; Song, Yinchen; Riera, Jorge

    2015-06-06

    Electroencephalogram (EEG) has traditionally been used to determine which brain regions are the most likely candidates for resection in patients with focal epilepsy. This methodology relies on the assumption that seizures originate from the same regions of the brain from which interictal epileptiform discharges (IEDs) emerge. Preclinical models are very useful for finding correlates between IED locations and the actual regions underlying seizure initiation in focal epilepsy. Rats are commonly used in preclinical studies of epilepsy; hence, there exists a large variety of models for focal epilepsy in this particular species. However, it is challenging to record multichannel EEG and to perform brain source imaging in such a small animal. To overcome this issue, we combine a patented technology for obtaining 32-channel EEG recordings from rodents with an MRI probabilistic atlas of brain anatomical structures in Wistar rats to perform brain source imaging. In this video, we introduce the procedures to acquire multichannel EEG from Wistar rats with focal cortical dysplasia, and describe the steps both to define the volume conductor model from the MRI atlas and to uniquely determine the IEDs. Finally, we validate the whole methodology by obtaining brain source images of IEDs and comparing them with those obtained at different time frames during seizure onset.

  6. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results on JPEG compressed images as well as MJPEG and H.264 compressed...

  7. Comparison of ultrasound imaging and video otoscopy with cross-sectional imaging for the diagnosis of canine otitis media.

    Science.gov (United States)

    Classen, J; Bruehschwein, A; Meyer-Lindenberg, A; Mueller, R S

    2016-11-01

Ultrasound imaging (US) of the tympanic bulla (TB) for the diagnosis of canine otitis media (OM) is less expensive and less invasive than cross-sectional imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI). Video otoscopy (VO) is used to clean inflamed ears. The objective of this study was to investigate the diagnostic value of US and VO in OM using cross-sectional imaging as the reference standard. Client-owned dogs with clinical signs of otitis externa (OE) and/or OM were recruited for the study. Physical, neurological, otoscopic and otic cytological examinations were performed on each dog, and both TBs were evaluated using US with an 8 MHz micro-convex probe, cross-sectional imaging (CT or MRI) and VO. Of 32 dogs enrolled, 24 had chronic OE (five also had clinical signs of OM), four had acute OE without clinical signs of OM, and four had OM without OE. Ultrasound imaging was positive in three of 14 ears in which OM was identified on cross-sectional imaging. One US result was a false positive. Sensitivity, specificity, positive and negative predictive values and accuracy of US were 21%, 98%, 75%, 81% and 81%, respectively. The corresponding values for VO were 91%, 98%, 91%, 98% and 97%, respectively. Video otoscopy could not identify OM in one case, while in another case the CT was negative although the tympanum was ruptured. Ultrasound imaging should not replace cross-sectional imaging for the diagnosis of canine OM but can be helpful, and VO was much more reliable than US. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse within a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, avoiding off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They offer different trade-offs between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme reduces memory traffic by 50% for VC-ME.

  9. High-Performance Motion Estimation for Image Sensors with Video Compression.

    Science.gov (United States)

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-08-21

It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse within a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, avoiding off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They offer different trade-offs between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme reduces memory traffic by 50% for VC-ME.
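The block-matching search that such data-reuse schemes accelerate can be sketched as follows. This is a plain full-search SAD matcher in Python/NumPy for illustration only; the paper's contribution, the on-chip buffering of reconstructed frames, is a hardware design and is not modelled here. All names are illustrative.

```python
import numpy as np

def full_search_me(ref, cur, block=8, rng=4):
    """Full-search block matching: for each block of the current frame,
    find the (dy, dx) displacement in the reference frame minimising the
    sum of absolute differences (SAD)."""
    h, w = cur.shape
    mvs = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-rng, rng + 1):
                for dx in range(-rng, rng + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                        cand = ref[y:y + block, x:x + block].astype(np.int32)
                        sad = int(np.abs(blk - cand).sum())
                        if best_sad is None or sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            mvs[(by, bx)] = best_mv
    return mvs
```

Note how every candidate displacement re-reads an overlapping window of `ref`: that overlap is exactly the intra-frame reuse opportunity, and keeping reconstructed frames on-chip between invocations is the inter-frame reuse the abstract describes.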

  10. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience. However, only a limited number of skilled sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  11. [Superimpose of images by appending two simple video amplifier circuits to color television (author's transl)].

    Science.gov (United States)

    Kojima, K; Hiraki, T; Koshida, K; Maekawa, R; Hisada, K

    1979-09-15

Images are very useful for obtaining diagnostic information in medical fields. By superimposing two or three images obtained from the same patient, information that cannot be found in a single image, for example the degree of overlap and anatomical landmarks, can often be found. In this paper, the characteristics of our trial color television system for superimposing X-ray images and/or radionuclide images are described. This color television system, which superimposes two images in different colors, consists of two monochromatic vidicon cameras and a conventional 20-inch color television to which only two simple video amplifier circuits have been added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. This system is a very simple and economical color display, and it enhances the degree of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method; as a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to locate tumors exactly. Furthermore, this system was very useful for color display of multinuclide scintigraphy.
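The channel-mixing idea behind this two-camera colour display can be reproduced digitally. The sketch below (NumPy, names illustrative) assigns each grayscale input to one colour channel of an RGB frame, so overlapping structures appear in the mixed colour; the original system did the same thing in analogue video hardware.

```python
import numpy as np

def superimpose(img_a, img_b):
    """Superimpose two grayscale images in different colours of one RGB
    frame: img_a drives the red channel, img_b the green channel.
    Overlapping structures appear yellow."""
    h, w = img_a.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = img_a  # red   <- first camera / modality
    rgb[..., 1] = img_b  # green <- second camera / modality
    return rgb
```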

  12. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Science.gov (United States)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D-to-3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras, where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. The reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). The dependability of the retrieved parameters for 3D localization has also been assessed by comparing the 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters with those computed using the GT parameters.

  13. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

Full Text Available The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D-to-3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras, where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. The reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). The dependability of the retrieved parameters for 3D localization has also been assessed by comparing the 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters with those computed using the GT parameters.
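The hypothesis-verify loop underlying RANSAC-style matching can be sketched with a toy 2D line-fitting problem. This is the classic base estimator only; the paper's LR-RANSAC adds a ranking step for hypotheses that is not reproduced here, and all names are illustrative.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Classic RANSAC: repeatedly hypothesise a line from a minimal
    2-point sample, verify it by counting inliers within `tol`, and
    keep the hypothesis with the largest consensus set.
    Returns (a, b) for y = a*x + b."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(a * x + b - y) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best
```

In the paper's setting the hypotheses are camera-parameter sets from matched line features rather than 2D lines, but the consensus-maximisation structure is the same.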

  14. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol.

    Science.gov (United States)

    Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter

    2017-10-25

    For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.

  15. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol

    Directory of Open Access Journals (Sweden)

    Mirae Harford

    2017-10-01

    Full Text Available Abstract Background For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. Methods We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. Discussion To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. Systematic review registration PROSPERO CRD42016029167

  16. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    Science.gov (United States)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

Low-resolution and unsharp facial images are often captured from surveillance videos because of long human-camera distances and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movement and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is employed to approximate the human face location and its velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amounts of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of clearly capturing facial images of a walking human on the first attempt in 90% of the test cases.
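The core of compensating for mechanical delay is aiming the active camera at where the face will be, not where it is. Under a constant-velocity assumption that reduces to a one-line prediction; the sketch below is a simplification of the paper's Human-Camera Synchronization Model, not its actual formulation.

```python
def predict_position(pos, vel, delay):
    """Predict a target's position after `delay` seconds, assuming
    linear motion: p' = p + v * delay (per coordinate).  The active
    camera's pan/tilt command would be computed from this predicted
    position rather than the currently observed one."""
    return tuple(p + v * delay for p, v in zip(pos, vel))
```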

  17. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonably priced, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  18. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the stock GoPro lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  19. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the stock GoPro lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  20. A Video Rate Confocal Laser Beam Scanning Light Microscope Using An Image Dissector

    Science.gov (United States)

    Goldstein, Seth R.; Hubin, Thomas; Rosenthal, Scott; Washburn, Clayton

    1989-12-01

    A video rate confocal reflected light microscope with no moving parts has been developed. Return light from an acousto-optically raster scanned laser beam is imaged from the microscope stage onto the photocathode of an Image Dissector Tube (IDT). Confocal operation is achieved by appropriately raster scanning with the IDT x and y deflection coils so as to continuously "sample" that portion of the photocathode that is being instantaneously illuminated by the return image of the scanning laser spot. Optimum IDT scan parameters and geometric distortion correction parameters are determined under computer control within seconds and are then continuously applied to insure system alignment. The system is operational and reflected light images from a variety of objects have been obtained. The operating principle can be extended to fluorescence and transmission microscopy.

  1. Venus in motion: An animated video catalog of Pioneer Venus Orbiter Cloud Photopolarimeter images

    Science.gov (United States)

    Limaye, Sanjay S.

    1992-01-01

Images of Venus acquired by the Pioneer Venus Orbiter Cloud Photopolarimeter (OCPP) during the 1982 opportunity have been utilized to create a short video summary of the data. The raw roll-by-roll images were first navigated using the spacecraft attitude and orbit information along with the CPP instrument pointing information. The limb darkening introduced by the variation of solar illumination geometry and the viewing angle was then modelled and removed. The images were then projected to simulate a view obtained from a fixed perspective with the observer at 10 Venus radii away and located above a Venus latitude of 30 degrees south and a longitude of 60 degrees west. A total of 156 images from the 1982 opportunity have been animated at different dwell rates.

  2. Towards Realising Secure and Efficient Image and Video Processing Applications on Quantum Computers

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2013-07-01

Full Text Available Exploiting the promise of security and efficiency that quantum computing offers, the basic foundations leading to commercial applications for quantum image processing are proposed. Two mathematical frameworks and algorithms are proposed to accomplish the watermarking of quantum images, authentication of ownership of already watermarked images, and recovery of their unmarked versions on quantum computers. Encoding the images as 2n-sized normalised Flexible Representation of Quantum Images (FRQI) states, with n qubits and one qubit dedicated, respectively, to capturing the information about the colour and position of every pixel in the image, the proposed algorithms utilise the flexibility inherent in the FRQI representation in order to confine the transformations on an image to any predetermined chromatic or spatial content (or a combination of both) of the image, as dictated by the watermark embedding, authentication or recovery circuits. Furthermore, by adopting an apt generalisation of the criteria required to realise physical quantum computing hardware, three standalone components that make up the framework to prepare, manipulate and recover the various contents required to represent and produce movies on quantum computers are also proposed. Each of the algorithms and the mathematical foundations for their execution were simulated using classical (i.e., conventional or non-quantum) computing resources, and their results were analysed alongside other longstanding classical computing equivalents. The work presented here, combined with the extensions suggested, provides the basic foundations towards effectuating secure and efficient classical-like image and video processing applications on the quantum-computing framework.
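The FRQI encoding can be simulated classically as a state vector: each pixel contributes cos/sin amplitudes of a colour angle, entangled with that pixel's position basis state. The sketch below uses a generic N-pixel normalisation (1/sqrt(N)) and an interleaved |position>|colour> ordering; both are illustrative choices, and the paper's exact circuit-level construction is not reproduced.

```python
import numpy as np

def frqi_state(angles):
    """Classically simulated FRQI-style state for N pixels with colour
    angles theta_i: amplitude cos(theta_i)/sqrt(N) on |i>|0> and
    sin(theta_i)/sqrt(N) on |i>|1>.  The result is a unit vector."""
    n_pix = len(angles)
    amp = np.zeros(2 * n_pix)
    for i, th in enumerate(angles):
        amp[2 * i] = np.cos(th)      # colour qubit |0> component of pixel i
        amp[2 * i + 1] = np.sin(th)  # colour qubit |1> component of pixel i
    return amp / np.sqrt(n_pix)
```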

  3. Analysis of Decorrelation Transform Gain for Uncoded Wireless Image and Video Communication.

    Science.gov (United States)

    Ruiqin Xiong; Feng Wu; Jizheng Xu; Xiaopeng Fan; Chong Luo; Wen Gao

    2016-04-01

An uncoded transmission scheme called SoftCast has recently shown great potential for wireless video transmission. Unlike conventional approaches, SoftCast processes input images only by a series of transformations and modulates the coefficients directly onto a dense constellation for transmission. The transmission is uncoded and lossy in nature, with its noise level commensurate with the channel condition. This paper presents a theoretical analysis of uncoded visual communication, focusing on developing quantitative measurements for the efficiency of the decorrelation transform in a generalized uncoded transmission framework. Our analysis reveals that the energy distribution among signal elements is critical for the efficiency of uncoded transmission. A decorrelation transform can potentially bring a significant performance gain by boosting the energy diversity in the signal representation. Numerical results on Markov random processes and real image and video signals are reported to evaluate the performance gain of using different transforms in uncoded transmission. The analysis presented in this paper is verified by simulated SoftCast transmissions, providing guidelines for designing efficient uncoded video transmission schemes.
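One simple way to quantify the "energy diversity" argument: in SoftCast-style analysis with optimal power allocation, the end-to-end distortion is proportional to the square of the sum of the per-component standard deviations, so a transform that concentrates variance reduces this quantity. The sketch below (a simplified reading of such analyses, not the paper's exact metric) measures the resulting gain for an AR(1) source under the DCT.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are frequency vectors)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def transform_gain(signal_cov, transform):
    """Ratio of (sum of component std devs)^2 before vs after the
    transform: > 1 means the transform lowers the uncoded-transmission
    distortion under optimal power allocation."""
    t_cov = transform @ signal_cov @ transform.T
    def cost(c):
        return np.sqrt(np.maximum(np.diag(c), 0.0)).sum() ** 2
    return cost(signal_cov) / cost(t_cov)
```

For a strongly correlated source the DCT gain is well above 1, while the identity transform gives exactly 1, matching the abstract's claim that decorrelation brings the performance gain.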

  4. JF-cut: a parallel graph cut approach for large-scale image and video.

    Science.gov (United States)

    Peng, Yi; Chen, Li; Ou-Yang, Fang-Xin; Chen, Wei; Yong, Jun-Hai

    2015-02-01

Graph cut has proven to be an effective scheme for solving a wide variety of segmentation problems in the vision and graphics community. The main limitation of conventional graph-cut implementations is that they can hardly handle large images or videos because of high computational complexity. Even though there are some parallelization solutions, they commonly suffer from low parallelism (on CPU) or low convergence speed (on GPU). In this paper, we present a novel graph-cut algorithm that leverages a parallelized jump flooding technique and a heuristic push-relabel scheme to enhance the graph-cut process, namely, back-and-forth relabel, convergence detection, and block-wise push-relabel. The entire process is parallelizable on GPU and outperforms existing GPU-based implementations in terms of global convergence, information propagation, and performance. We design an intuitive user interface for specifying regions of interest in cases of occlusion when handling video sequences. Experiments on a variety of data sets, including images (up to 15 K × 10 K), videos (up to 2.5 K × 1.5 K × 50), and volumetric data, achieve high-quality results and a maximum 40-fold (139-fold) speedup over conventional GPU-based (CPU-based) approaches.
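Graph-cut segmentation reduces to a max-flow/min-cut computation on a pixel graph. As context for what the GPU push-relabel scheme accelerates, here is the sequential Edmonds-Karp baseline on a small adjacency-matrix graph (a textbook algorithm, not the paper's method).

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow: repeatedly find a shortest augmenting
    path by BFS in the residual graph and saturate its bottleneck."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximal
            return total
        bottleneck = float("inf")    # minimum residual capacity on the path
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t                        # push the bottleneck along the path
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

Push-relabel methods replace these global path searches with local per-vertex operations, which is what makes block-wise GPU parallelisation possible.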

  5. Validation of a pediatric vocal fold nodule rating scale based on digital video images.

    Science.gov (United States)

    Nuss, Roger C; Ward, Jessica; Recko, Thomas; Huang, Lin; Woodnorth, Geralyn Harvey

    2012-01-01

    We sought to create a validated scale of vocal fold nodules in children, based on digital video clips obtained during diagnostic fiberoptic laryngoscopy. We developed a 4-point grading scale of vocal fold nodules in children, based upon short digital video clips. A tutorial for use of the scale, including schematic drawings of nodules, static images, and 10-second video clips, was presented to 36 clinicians with various levels of experience. The clinicians then reviewed 40 short digital video samples from pediatric patients evaluated in a voice clinic and rated the nodule size. Statistical analysis of the ratings provided inter-rater reliability scores. Thirty-six clinicians with various levels of experience rated a total of 40 short video clips. The ratings of experienced raters (14 pediatric otolaryngology attending physicians and pediatric otolaryngology fellows) were compared with those of inexperienced raters (22 nurses, medical students, otolaryngology residents, physician assistants, and pediatric speech-language pathologists). The overall intraclass correlation coefficient for the ratings of nodule size was quite good (0.62; 95% confidence interval, 0.52 to 0.74). The p value for experienced raters versus inexperienced raters was 0.1345, indicating no statistically significant difference in the ratings by these two groups. The intraclass correlation coefficient for intra-rater reliability was very high (0.89). The use of a dynamic scale of pediatric vocal fold nodule size most realistically represents the clinical assessment of nodules during an office visit. The results of this study show a high level of agreement between experienced and inexperienced raters. This scale can be used with a high level of reliability by clinicians with various levels of experience. A validated grading scale will help to assess long-term outcomes of pediatric patients with vocal fold nodules.
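The intraclass correlation coefficients reported above can be illustrated with a small computation. The study's exact ICC model is not stated in detail, so the sketch below uses the simplest one-way random-effects form, ICC(1,1), purely for illustration.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects x k_raters)
    score matrix: (MSB - MSW) / (MSB + (k-1)*MSW), where MSB/MSW are
    the between- and within-subject mean squares."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect rater agreement yields an ICC of 1, while systematic disagreement drives it toward or below 0, which is the sense in which the study's 0.62 indicates good but imperfect agreement.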

  6. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
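The topic-based publish/subscribe routing described for the messaging layer can be sketched in a few lines. The class and method names below are illustrative, not the paper's API; a real implementation would add threading and message queues.

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe hub: publishers send to a
    topic, and only callbacks subscribed to that topic receive the
    message (the filtering described in the architecture)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)
```

In the described architecture, acquisition modules would publish frames to a topic such as "frames" while visualization and processing modules subscribe to it, decoupling producers from consumers.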

  7. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Ebrahimi Touradj

    2004-01-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation as well as on the cognitive task (semantic segmentation at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties and the definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to the sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an -dimensional feature space, composed of static as well as dynamic image attributes. 
We propose an interaction mechanism between the semantic and the region partitions which makes it possible to cope with multiple
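The lower-level, data-driven region segmentation described above can be sketched as plain k-means clustering of pixels embedded in a small feature space. This is only a minimal illustration, not the paper's algorithm: the feature set (an intensity plus a motion attribute) and the data are invented for the example.

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Plain k-means: assign each sample to the nearest centre, update centres."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Two well-separated pixel populations in an (intensity, motion) feature space:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.2, 0.0], 0.02, (50, 2)),
               rng.normal([0.8, 1.0], 0.02, (50, 2))])
labels, _ = kmeans(X, X[[0, -1]])  # seed one centre in each population
```

With well-separated clusters the partition is recovered exactly; the paper's richer static/dynamic attributes would replace the two toy features.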

  8. The research on binocular stereo video imaging and display system based on low-light CMOS

    Science.gov (United States)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

Low-light night-vision helmets are commonly equipped with a binocular viewer and image intensifiers. Such equipment not only provides night-vision capability but also a sense of stereo vision, allowing better perception and understanding of the visual field. However, since the image intensifier is designed for direct observation, it is difficult to apply modern image processing technology to it. Developing digital video technology for night vision is therefore of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED micro display and an image processing PCB. Stereopsis is achieved through the binocular OLED micro display. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive the constraints of binocular stereo display in detail. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED micro display. There is sufficient space for function extensions in our system: the performance of this low-light night-vision helmet can be further enhanced in combination with HDR and image fusion technology.
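As a concrete illustration of the disparity-to-depth step, a rectified stereo pair with focal length f (in pixels) and baseline B gives depth Z = f·B/d for disparity d. The coordinates, focal length and baseline below are invented values, not the helmet's calibration:

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair (disparity d in pixels)."""
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return focal_px * baseline_m / disparity

# x-coordinates of three matched features (e.g. from SURF matching) in the
# left and right views:
xl = np.array([320.0, 410.0, 150.0])
xr = np.array([300.0, 402.0, 145.0])
depths = depth_from_disparity(xl, xr, focal_px=800.0, baseline_m=0.065)
# Larger disparity means a nearer point.
```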

  9. Image deblurring in video stream based on two-level image model

    Science.gov (United States)

    Mukovozov, Arseniy; Nikolaev, Dmitry; Limonova, Elena

    2017-03-01

An iterative algorithm is proposed for blind multi-image deblurring of binary images. Binarity is the only prior restriction imposed on the image. The image formation model assumes convolution with an arbitrary kernel and addition of a constant value. A penalty functional is composed using the binarity constraint for regularization. The algorithm estimates the original image and the distortion parameters by alternately reducing the two parts of this functional. Experimental results for natural (non-synthetic) data are presented.
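The alternation described above has two half-steps. Once the binary image estimate is fixed, the kernel and the additive constant follow from linear least squares; the sketch below shows only that half-step on a toy 1-D signal (all data invented, and the binary-projection half-step omitted):

```python
import numpy as np

def estimate_kernel(y, x, klen):
    """Least-squares fit of kernel h and offset c in y = conv(x, h) + c,
    given the current binary image estimate x."""
    n = len(y)
    A = np.zeros((n, klen + 1))
    for i in range(n):
        for j in range(klen):
            if 0 <= i - j < len(x):
                A[i, j] = x[i - j]
        A[i, klen] = 1.0  # column for the constant offset
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:klen], sol[klen]

# Synthesize an observation from a known binary signal, kernel and offset:
x_true = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0], dtype=float)
h_true = np.array([0.6, 0.3, 0.1])
c_true = 0.05
y = np.convolve(x_true, h_true, mode="full")[:len(x_true)] + c_true

# With the true binary image, least squares recovers kernel and offset exactly:
h_est, c_est = estimate_kernel(y, x_true, klen=3)
```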

  10. TND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    Science.gov (United States)

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

In this paper, we present a computer-aided diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND for the provision of a second, more objective opinion to radiologists by exploiting image evidence.

  11. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    Science.gov (United States)

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
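The patent abstract does not spell out how intensity ratios become temperatures. A common approach in ratio (two-colour) pyrometry is to invert the Wien approximation of Planck's law for the band ratio, assuming a gray emitter and idealized narrow-band filters; the sketch below assumes exactly that, with illustrative band wavelengths for the R and G filter channels:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T):
    """Wien-approximation spectral radiance (arbitrary scale), wavelength in m."""
    return lam ** -5.0 * np.exp(-C2 / (lam * T))

def temperature_from_ratio(i1, i2, lam1, lam2):
    """Invert the two-band intensity ratio for temperature (gray body assumed)."""
    lhs = np.log(i1 / i2) + 5.0 * np.log(lam1 / lam2)
    return -C2 * (1.0 / lam1 - 1.0 / lam2) / lhs

# Round trip at 1500 K using nominal red and green band centres:
lam_r, lam_g = 620e-9, 540e-9
T_est = temperature_from_ratio(wien_intensity(lam_r, 1500.0),
                               wien_intensity(lam_g, 1500.0),
                               lam_r, lam_g)
```

A real camera would also need the filters' spectral responses and a calibration against a blackbody source; this only shows the ratio-to-temperature inversion.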

  12. Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image.

    Science.gov (United States)

    Songfan Yang; Bhanu, B

    2012-08-01

Existing video-based facial expression recognition techniques analyze the geometry-based and appearance-based information in every frame as well as explore the temporal relation among frames. In contrast, we present a new image-based representation and an associated reference image, called the emotion avatar image (EAI) and the avatar reference, respectively. This representation handles out-of-plane head rotation. It is not only robust to outliers but also provides a method to aggregate dynamic information from expressions of various lengths. The approach to facial expression analysis consists of the following steps: 1) face detection; 2) face registration of video frames with the avatar reference to form the EAI representation; 3) computation of features from EAIs using both local binary patterns and local phase quantization; and 4) classification of the features as one of the emotion types using a linear support vector machine classifier. Our system is tested on the Facial Expression Recognition and Analysis Challenge (FERA2011) data, i.e., the Geneva Multimodal Emotion Portrayal-Facial Expression Recognition and Analysis Challenge (GEMEP-FERA) data set. The experimental results demonstrate that the information captured in an EAI for a facial expression is a very strong cue for emotion inference. Moreover, our method suppresses the person-specific information for emotion and performs well on unseen data.
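Step 3 of the pipeline computes local binary patterns from the EAIs. A minimal numpy version of the basic 8-neighbour LBP histogram (without the uniform-pattern refinement the authors may use, and with the SVM classification step omitted) could look like:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels."""
    rows, cols = img.shape
    center = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:rows - 1 + dy, 1 + dx:cols - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes: the texture feature vector."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# On a perfectly flat patch every neighbour ties the centre, so all 8 bits
# are set and every code is 255:
flat = np.full((6, 6), 7, dtype=np.uint8)
feature = lbp_histogram(flat)
```

The resulting histograms (one per image block, concatenated) would then be fed to the linear SVM.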

  13. Error protection and interleaving for wireless transmission of JPEG 2000 images and video.

    Science.gov (United States)

    Baruffa, Giuseppe; Micanti, Paolo; Frescura, Fabrizio

    2009-02-01

The transmission of JPEG 2000 images or video over wireless channels has to cope with the high probability and burstiness of errors introduced by Gaussian noise, linear distortions, and fading. At the receiver side, there is distortion due to the compression performed at the sender side and to the errors introduced into the data stream by the channel. Progressive source coding can be successfully exploited to protect different portions of the data stream with different channel code rates, based upon the relative importance that each portion has for the reconstructed image. Unequal Error Protection (UEP) schemes are generally adopted, which offer a close-to-optimal solution. In this paper, we present a dichotomic technique for searching for the optimal UEP strategy, which borrows ideas from existing algorithms, for the transmission of JPEG 2000 images and video over a wireless channel. Moreover, we also adopt a method of virtual interleaving for the transmission of high-bit-rate streams over packet loss channels, guaranteeing a large PSNR advantage over a plain transmission scheme. These two protection strategies can also be combined to maximize the error correction capabilities.

  14. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

Full Text Available The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce the cost of street scene reconstruction and updating because of their low price, wide use, and extensive shooting coverage, we propose a new method, called the Mask automatic detecting method, to improve structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since the features on them should be masked out to avoid poor matches. After removing these feature points in our new method, the camera poses and sparse 3D points are reconstructed with the remaining matches. Our contrast experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing features from the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected the problem of the typical methods repeatedly reconstructing several buildings when there was only one target building.

  15. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which builds on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, the deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods can simultaneously attain a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods in low-bitrate transmission.
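The zero-reconstruction-error property of a linear autoencoder can be checked directly: with a linear encoder/decoder pair obtained by SVD (equivalent to PCA up to rotation, a stand-in for the paper's DLA), reconstruction is exact once the latent dimension reaches the rank of the patch matrix. The patch data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 8x8 patches (flattened to 64) lying in a 16-dimensional subspace:
patches = rng.normal(size=(100, 16)) @ rng.normal(size=(16, 64))

mean = patches.mean(axis=0)
centered = patches - mean
# Linear encoder/decoder from the SVD of the centered patch matrix:
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 16                          # latent dimension = rank of the patch matrix
codes = centered @ Vt[:k].T     # encode: 64 -> 16
recon = codes @ Vt[:k] + mean   # decode: 16 -> 64
err = np.max(np.abs(recon - patches))  # ~1e-12: exact up to float precision
```

A nonlinear bottleneck of the same width offers no such guarantee, which is the point the abstract makes.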

  16. Individual differences in the processing of smoking-cessation video messages: An imaging genetics study.

    Science.gov (United States)

    Shi, Zhenhao; Wang, An-Li; Aronowitz, Catherine A; Romer, Daniel; Langleben, Daniel D

    2017-09-01

Studies testing the benefits of enriching smoking-cessation video ads with attention-grabbing sensory features have yielded variable results. Dopamine transporter gene (DAT1) has been implicated in attention deficits. We hypothesized that DAT1 polymorphism is partially responsible for this variability. Using functional magnetic resonance imaging, we examined brain responses to videos high or low in attention-grabbing features, indexed by "message sensation value" (MSV), in 53 smokers genotyped for DAT1. Compared to other smokers, 10/10 homozygotes showed greater neural response to High- vs. Low-MSV smoking-cessation videos in two a priori regions of interest: the right temporoparietal junction and the right ventrolateral prefrontal cortex. These regions are known to underlie stimulus-driven attentional processing. Exploratory analysis showed that the right temporoparietal response positively predicted follow-up smoking behavior indexed by urine cotinine. Our findings suggest that responses to attention-grabbing features in smoking-cessation messages are affected by the DAT1 genotype. Copyright © 2017. Published by Elsevier B.V.

  17. In situ calibration of an infrared imaging video bolometer in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292 (Japan); Pandya, S. N.; Sano, R. [The Graduate University for Advanced Studies, 322-6 Oroshi-cho, Toki 509-5292 (Japan)

    2014-11-15

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.

  18. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    Science.gov (United States)

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2017-11-17

To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. To evaluate whether additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered to be painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text-information (CMR-video/n = 49) or standard text-information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the cardiovascular magnetic resonance imaging-standard group. Anxiety was evaluated before, immediately after the procedure and 1 week later. Five questionnaires were used: Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers, blinded to the information given. Data were collected between April 2015-April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the cardiovascular magnetic resonance imaging-standard group in the factor Relaxation (p = .039) but not in the factor Anxiety. Anxiety levels were lower during scintigraphic examinations compared to the CMR-standard group (p magnetic resonance imaging increased by adding video information prior to the exam, which is important in relation to perceived quality in nursing. No effect was seen on motion

  19. A comparison between flexible electrogoniometers, inclinometers and three-dimensional video analysis system for recording neck movement.

    Science.gov (United States)

    Carnaz, Letícia; Moriguchi, Cristiane S; de Oliveira, Ana Beatriz; Santiago, Paulo R P; Caurin, Glauco A P; Hansson, Gert-Åke; Coury, Helenice J C Gil

    2013-11-01

This study compared neck range-of-movement recording using three different methods: flexible electrogoniometers (EGM), inclinometers (INC) and a three-dimensional video analysis system (IMG), in simultaneous and synchronized data collection. Twelve females performed neck flexion-extension, lateral flexion, rotation and circumduction. The differences between EGM, INC, and IMG were calculated sample by sample. For the flexion-extension movement, IMG underestimated the amplitude by 13%; moreover, EGM showed a crosstalk of about 20% for the lateral flexion and rotation axes. In lateral flexion movement, all systems showed similar amplitude and the inter-system differences were moderate (4-7%). For the rotation movement, EGM showed a high crosstalk (13%) for the flexion-extension axis. During the circumduction movement, IMG underestimated the amplitude of flexion-extension movements by about 11%, and the inter-system differences were high (about 17%) except for INC-IMG regarding lateral flexion (7%) and EGM-INC regarding flexion-extension (10%). For application in the workplace, INC presents good results compared to IMG and EGM, though INC cannot record rotation. EGM should be improved in order to reduce its crosstalk errors and allow recording of the full neck range of movement. Due to non-optimal positioning of the cameras for recording flexion-extension, IMG underestimated the amplitude of these movements. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States); UT Graduate School of Biomedical Sciences, Houston, TX (United States); Yang, J; Beadle, B [UT MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
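The triangulation step mentioned above can be sketched with the standard linear (DLT) method, given two camera projection matrices. The matrices and the point below are synthetic, not data from the phantom study:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: image coordinates in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]             # null vector of A, in homogeneous coordinates
    return X[:3] / X[3]

# Two synthetic camera poses one unit apart along x, viewing a point 5 units
# ahead (normalized image coordinates, identity intrinsics):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = np.array([0.0, 0.0])    # projection of (0, 0, 5) in view 1
x2 = np.array([-0.2, 0.0])   # projection of (0, 0, 5) in view 2
point = triangulate(P1, P2, x1, x2)
```

With noisy tracked features the null vector is the least-squares solution, which is where the triangulation inaccuracy reported in the abstract would show up.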

  1. Influence of image compression on the quality of UNB pan-sharpened imagery: a case study with security video image frames

    Science.gov (United States)

    Adhamkhiabani, Sina Adham; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

UNB Pan-sharp, also named FuzeGo, is an image fusion technique to produce high resolution color satellite images by fusing a high resolution panchromatic (monochrome) image and a low resolution multispectral (color) image. This is an effective solution that modern satellites have been using to capture high resolution color images at an ultra-high speed. Initial research on security camera systems shows that the UNB Pan-sharp technique can also be utilized to produce high resolution and high sensitivity color video images for various imaging and monitoring applications. Based on the UNB Pan-sharp technique, a video camera prototype system, called the UNB Super-camera system, was developed that captures high resolution panchromatic images and low resolution color images simultaneously, and produces real-time high resolution color video images on the fly. In a separate study, it was proved that the UNB Super Camera outperforms conventional 1-chip and 3-chip color cameras in image quality, especially when the illumination is low, such as in room lighting. In this research the influence of image compression on the quality of UNB Pan-sharped high resolution color images is evaluated, since image compression is widely used in still and video cameras to reduce data volume and speed up data transfer. The results demonstrate that UNB Pan-sharp can consistently produce high resolution color images that have the same detail as the input high resolution panchromatic image and the same color as the input low resolution color image, regardless of the compression ratio and lighting condition. In addition, the high resolution color images produced by UNB Pan-sharp have higher sensitivity (signal-to-noise ratio) and better edge sharpness and color rendering than those of a same-generation 1-chip color camera, regardless of the compression ratio and lighting condition.
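UNB Pan-sharp itself is a proprietary (least-squares-based) method, so as a stand-in the sketch below uses the classical Brovey ratio transform, which shares the basic idea: inject panchromatic detail by scaling the upsampled color bands by a pan-to-intensity ratio.

```python
import numpy as np

def brovey_pansharpen(ms_up, pan, eps=1e-12):
    """ms_up: (H, W, B) multispectral image upsampled to the pan grid;
    pan: (H, W) panchromatic image. Each band is scaled by pan / intensity."""
    intensity = ms_up.mean(axis=2)
    return ms_up * (pan / (intensity + eps))[..., None]

# A neutral-gray MS image fused with a brighter pan image: every band is
# lifted to the pan value, so spatial detail comes from pan, hue from MS.
ms = np.full((4, 4, 3), 0.5)
pan = np.full((4, 4), 0.8)
fused = brovey_pansharpen(ms, pan)
```

Unlike UNB Pan-sharp, the plain Brovey transform can distort colors when the bands' spectral weights differ; it is shown here only to illustrate the fusion principle.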

  2. Security SVGA image sensor with on-chip video data authentication and cryptographic circuit

    Science.gov (United States)

    Stifter, P.; Eberhardt, K.; Erni, A.; Hofmann, K.

    2005-10-01

Security applications of sensors in a networking environment have a strong demand for sensor authentication and secure data transmission, due to the possibility of man-in-the-middle and address-spoofing attacks. A secure sensor system should therefore fulfil the three standard requirements of cryptography, namely data integrity, authentication and non-repudiation. This paper presents a unique sensor development by AIM, the so-called SecVGA, which is a high performance, monochrome (B/W) CMOS active pixel image sensor. The device is capable of capturing still and motion images with a resolution of 800x600 active pixels and converting the image into a digital data stream. The distinguishing feature of this development in comparison to standard imaging sensors is the on-chip cryptographic engine, which provides sensor authentication based on a one-way challenge/response protocol. The implemented protocol results in the exchange of a session key which secures the following video data transmission. This is achieved by calculating a cryptographic checksum derived from a stateful hash value of the complete image frame. Every sensor contains an EEPROM memory cell for the non-volatile storage of a unique identifier. The imager is programmable via a two-wire I2C-compatible interface which controls the integration time, the active window size of the pixel array, the frame rate and various operating modes, including the authentication procedure.
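The SecVGA protocol details are not public, so the following is only a generic illustration of a one-way challenge/response handshake plus a per-frame cryptographic checksum, built on HMAC-SHA256; the device key and its handling are invented for the example.

```python
import hashlib
import hmac
import secrets

DEVICE_KEY = b"per-sensor secret provisioned in EEPROM"  # illustrative only

def sensor_response(challenge: bytes) -> bytes:
    """Sensor side: answer the host's random challenge with a keyed MAC."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes) -> bool:
    """Host side: recompute the MAC and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def frame_checksum(session_key: bytes, frame: bytes) -> bytes:
    """Per-frame cryptographic checksum protecting the video stream."""
    return hmac.new(session_key, frame, hashlib.sha256).digest()

challenge = secrets.token_bytes(16)
resp = sensor_response(challenge)
ok = host_verify(challenge, resp)            # authentic sensor: True
forged = host_verify(challenge, b"\x00" * 32)  # forged response: False
```

A real deployment would derive a fresh session key from the exchange rather than reusing the device key, and the abstract's "stateful hash" suggests chaining across frames; both are omitted here.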

  3. Reliable assessment of general surgeons' non-technical skills based on video-recordings of patient simulated scenarios.

    Science.gov (United States)

    Spanager, Lene; Beier-Holgersen, Randi; Dieckmann, Peter; Konge, Lars; Rosenberg, Jacob; Oestergaard, Doris

    2013-11-01

Non-technical skills are essential for safe and efficient surgery. The aim of this study was to evaluate the reliability of an assessment tool for surgeons' non-technical skills, Non-Technical Skills for Surgeons dk (NOTSSdk), and the effect of rater training. A 1-day course was conducted for 15 general surgeons in which they rated surgeons' non-technical skills in 9 video recordings of scenarios simulating real intraoperative situations. Data were gathered from 2 sessions separated by a 4-hour training session. Interrater reliability was high for both pretraining ratings (Cronbach's α = .97) and posttraining ratings (Cronbach's α = .98). There was no statistically significant development in assessment skills. The D study showed that 2 untrained raters or 1 trained rater was needed to obtain generalizability coefficients >.80. The high pretraining interrater reliability indicates that the videos were easy to rate and NOTSSdk easy to use. This implies that NOTSSdk could be an important tool in surgical training, potentially improving safety and quality for surgical patients. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. In the here and now: enhanced motor corticospinal excitability in novices when watching live compared to video recorded dance.

    Science.gov (United States)

    Jola, Corinne; Grosbras, Marie-Hélène

    2013-01-01

    Enhanced motor corticospinal excitability (MCE) in passive action observation is thought to signify covert motor resonance with the actions seen. Actions performed by others are an important social stimulus and thus, motor resonance is prevalent during social interaction. However, most studies employ simple/short snippets of recorded movements devoid of any real-life social context, which has recently been criticized for lacking ecological validity. Here, we investigated whether the co-presence of the actor and the spectator has an impact on motor resonance by comparing novices' MCE for the finger (FDI) and the arm (ECR) with single-pulse transcranial magnetic stimulation when watching five-minute solos of ballet dance, Bharatanatyam (Indian dance) and an acting control condition either live or on video. We found that (1) MCE measured in the arm muscle was significantly enhanced in the live compared to the video condition, (2) differences across performances were only evident in the live condition, and (3) our novices reported enjoying the live presentations significantly more. We suggest that novice spectators' MCE is susceptible to the performers' live presence.

  5. New technologies in the Physical Education class. A positive experience with the digital video recording and vertical jump

    Directory of Open Access Journals (Sweden)

    Daniel Rojano Ortega

    2010-01-01

Full Text Available The objective of the Basic Competences is to highlight the essential learning of the Secondary School Curriculum. The fourth Basic Competence introduces into the Secondary School Program the use of Information and Communication Technologies as an essential element to be informed, to learn and to communicate. To that end, this article tries to bring the new technologies into the Physical Education class, specifically into the analysis of the vertical jump. This jump has traditionally been evaluated with the Sargent test, but this test has some errors which derive from the measuring process. Nowadays there are new, very precise instruments often used in sports for the analysis of the vertical jump, but their high prices make it difficult to introduce them in the school. With this article we want to show that digital video recording and video editing programs constitute a very appropriate way to evaluate the vertical jump, because it arouses great interest and involvement in the students.
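Frame-accurate video is what makes the flight-time method practical in class: count the frames between take-off and landing in the editing program and convert to height with h = g·t²/8 (valid when take-off and landing posture match). The frame numbers and frame rate below are illustrative:

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height(takeoff_frame, landing_frame, fps):
    """Vertical jump height from flight time: h = g * t^2 / 8."""
    flight_time = (landing_frame - takeoff_frame) / fps
    return G * flight_time ** 2 / 8.0

# 15 frames of flight at 30 fps -> 0.5 s in the air -> about 0.31 m:
h = jump_height(takeoff_frame=120, landing_frame=135, fps=30.0)
```

At 30 fps each frame of miscounting changes the estimate by a few centimetres, which is why a higher recording frame rate improves precision.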

  6. A Study on the Read/Write Experimental Results for a High-Definition Digital Video Disc Recorder using Blue-Laser Diode

    Science.gov (United States)

    Aoki, Ikuo

    2001-03-01

    At present, tape media are mainly used for video recording globally. However, in the near future, disc media will come into general use, as they possess many strong points compared with tape media. Thus, we are now researching the development of a high-definition digital video disc recorder with high capacity, high data transfer rate, and low cost. Our target specifications are 15 GB to 18 GB, and over 35 Mbps, using a 120 mm phase change disc and a blue-laser diode. To confirm that it is possible, numerous sample discs were manufactured and experiments were carried out. We succeeded in obtaining good experimental results. In this study, we demonstrate the possibility of realizing a high-definition digital video disc recorder using a 120 mm phase change disc and a blue-laser diode without using a disc cartridge or any extraordinary method that increases the cost.

  7. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating a high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.

  8. Automatic lameness detection based on consecutive 3D-video recordings

    NARCIS (Netherlands)

    Hertem, van T.; Viazzi, S.; Steensels, M.; Maltz, E.; Antler, A.; Alchanatis, V.; Schlageter-Tello, A.; Lokhorst, C.; Romanini, C.E.B.; Bahr, C.; Berckmans, D.; Halachmi, I.

    2014-01-01

    Manual locomotion scoring for lameness detection is a time-consuming and subjective procedure. Therefore, the objective of this study is to optimise the classification output of a computer vision based algorithm for automated lameness scoring. Cow gait recordings were made during four consecutive

  9. Direct ultrasound to video registration using photoacoustic markers from a single image pose

    Science.gov (United States)

    Cheng, Alexis; Guo, Xiaoyu; Kang, Hyun Jae; Choti, Michael A.; Kang, Jin U.; Taylor, Russell H.; Boctor, Emad M.

    2015-03-01

Fusion of video and other imaging modalities is common in modern surgical scenarios to provide surgeons with additional information. Doing so requires the use of interventional guidance equipment and surgical navigation systems to register the tools and devices used in surgery with each other. In this work, we focus explicitly on registering ultrasound with a stereocamera system using photoacoustic markers. Previous work has shown that photoacoustic markers can be used to register three-dimensional ultrasound with video, resulting in target registration errors lower than those of currently available systems. Photoacoustic markers are non-collinear laser spots projected onto a surface. They can be simultaneously visualized by a stereocamera system and in an ultrasound volume because of the photoacoustic effect. This work replaces the three-dimensional ultrasound volume with images from a single ultrasound image pose. While an ultrasound volume provides more information than an ultrasound image, it has disadvantages such as higher cost and slower acquisition rate. However, in general, it is difficult to register two-dimensional with three-dimensional spatial data. We propose the use of photoacoustic markers viewed by a convex array ultrasound transducer. Each photoacoustic marker's wavefront provides information on its elevational position, resulting in three-dimensional spatial data. This development enhances the method's practicality, as convex array transducers are more common in surgical practice than three-dimensional transducers. This work is demonstrated on a synthetic phantom. The resulting target registration error for this experiment was 2.47 mm with a standard deviation of 1.29 mm, which is comparable to currently available systems.
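Once the photoacoustic markers are located in both coordinate systems, the registration itself is a rigid point-set alignment. A standard way to compute it is the Kabsch/SVD solution sketched below (the abstract does not name the solver actually used, and the marker coordinates here are synthetic):

```python
import numpy as np

def kabsch(A, B):
    """Least-squares rigid transform (R, t) with B ≈ A @ R.T + t (Nx3 points)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic markers and a known pose (30 degrees about z plus a translation):
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -1.0, 2.0])
markers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
moved = markers @ R_true.T + t_true
R_est, t_est = kabsch(markers, moved)
# Target registration error: residual after applying the estimated transform.
tre = np.linalg.norm(markers @ R_est.T + t_est - moved, axis=1).max()
```

With noisy marker localization (as in the phantom experiment) the residual becomes the millimetre-scale TRE the abstract reports.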

  10. Proposal of a Video-recording System for the Assessment of Bell's Palsy: Methodology and Preliminary Results.

    Science.gov (United States)

    Monini, Simonetta; Marinozzi, Franco; Atturo, Francesca; Bini, Fabiano; Marchelletta, Silvia; Barbara, Maurizio

    2017-09-01

    To propose a new objective video-recording procedure to assess and monitor over time the severity of facial nerve palsy. No objective method for facial palsy (FP) assessment is universally accepted. The face of subjects presenting with different degrees of facial nerve deficit, as measured by the House-Brackmann (HB) grading system, was videotaped after positioning, at specific points, 10 gray circular markers made of a retroreflective material. Video-recording included the resting position and six ordered facial movements. Editing and data elaboration were performed using software designed to assess marker distances. A score was then extracted from the differences in marker distances between the two sides. The higher the FP degree, the higher the score registered during each movement. The statistical significance differed during the various movements between the different FP degrees, being uniform when closing the eyes gently; whereas when wrinkling the nose, there was no difference between the HB grade III and IV groups and, when smiling, no difference was evidenced between the HB grade IV and V groups. The global range index, which represents the overall degree of FP, was between 6.2 and 7.9 in the normal subjects (HB grade I); between 10.6 and 18.91 in HB grade II; between 22.19 and 33.06 in HB grade III; between 38.61 and 49.75 in HB grade IV; and between 50.97 and 66.88 in HB grade V. The proposed objective methodology could provide numerical data that correspond to the different degrees of FP, as assessed by the subjective HB grading system. These data can in addition be used singularly to score selected areas of the paralyzed face when recovery occurs with different timing in the different face regions.
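    The scoring idea described above, comparing marker distances between the two sides of the face, can be sketched minimally as follows. The exact distances, marker pairings and any normalization used by the authors are not specified here, so the details are illustrative assumptions:

```python
def asymmetry_score(left_distances, right_distances):
    """Score facial asymmetry from paired marker distances.

    left_distances / right_distances: distances (e.g. in mm) between
    corresponding marker pairs on each side of the face during a movement.
    Returns the summed absolute left-right difference: under the scheme
    described in the abstract, a higher score means a higher palsy grade.
    """
    if len(left_distances) != len(right_distances):
        raise ValueError("marker pair lists must match in length")
    return sum(abs(l - r) for l, r in zip(left_distances, right_distances))
```

    A perfectly symmetric movement scores zero; any left-right discrepancy accumulates into the score.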

  11. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    Science.gov (United States)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

    Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation, and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large-aperture optics, which increases the size and weight of such systems beyond what many applications can support. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.

  12. Coxofemoral joint kinematics using video fluoroscopic images of treadmill-walking cats: development of a technique to assess osteoarthritis-associated disability.

    Science.gov (United States)

    Guillot, Martin; Gravel, Pierre; Gauthier, Marie-Lou; Leblond, Hugues; Tremblay, Maurice; Rossignol, Serge; Martel-Pelletier, Johanne; Pelletier, Jean-Pierre; de Guise, Jacques A; Troncy, Eric

    2015-02-01

    The objective of this pilot study was to develop a video fluoroscopy kinematics method for the assessment of the coxofemoral joint in cats with and without osteoarthritis (OA)-associated disability. Two non-OA cats and four cats affected by coxofemoral OA were evaluated by video fluoroscopy. Video fluoroscopic images of the coxofemoral joints were captured at 120 frames/s using a customized C-arm X-ray system while cats walked freely on a treadmill at 0.4 m/s. The angle patterns over time of the coxofemoral joints were extracted using a graphic user interface following four steps: (i) correction for image distortion; (ii) image denoising and contrast enhancement; (iii) frame-to-frame anatomical marker identification; and (iv) statistical gait analysis. A reliability analysis was performed. The cats with OA presented greater intra-subject stride and gait cycle variability. Three cats with OA presented a left-right asymmetry in the range of movement of the coxofemoral joint angle in the sagittal plane (two with no overlap of the 95% confidence interval, and one with only a slight overlap) consistent with their painful OA joint, and a longer gait cycle duration. The reliability analysis revealed an absolute variation in the coxofemoral joint angle of 2°-6°, indicating that the two-dimensional video fluoroscopy technique provided reliable data. Improvement of this method is recommended: variability would likely be reduced if a larger field of view could be recorded, allowing the identification and tracking of each femoral axis, rather than the trochanter landmarks. The range of movement of the coxofemoral joint has the potential to be an objective marker of OA-associated disability. © ISFM and AAFP 2014.
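    Extracting a joint angle from frame-to-frame anatomical markers, as in steps (iii) and (iv) above, reduces to computing the angle at the joint vertex formed by three 2-D landmarks, and the range of movement is the spread of that angle over a gait cycle. A generic sketch (the landmark choice is hypothetical):

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by 2-D landmarks a-b-c,
    e.g. pelvis marker, hip joint centre, femoral landmark."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    # clamp to [-1, 1] to protect acos from rounding error
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def range_of_motion(angles):
    """Range of movement over a sequence of per-frame joint angles."""
    return max(angles) - min(angles)
```

    Comparing `range_of_motion` for the left and right joints over matched gait cycles yields the kind of left-right asymmetry the study reports.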

  13. Face identification in videos from mobile cameras

    OpenAIRE

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam; even a good face matcher on still images would give many false alarms due to the uncontrolled conditions. This paper presents an approach to identify faces in videos from mobile cameras. A commercial face matcher F...

  14. Video-rate bioluminescence imaging of matrix metalloproteinase-2 secreted from a migrating cell.

    Directory of Open Access Journals (Sweden)

    Takahiro Suzuki

    Full Text Available BACKGROUND: Matrix metalloproteinase-2 (MMP-2 plays an important role in cancer progression and metastasis. MMP-2 is secreted as a pro-enzyme, which is activated by membrane-bound proteins, and the polarized distribution of secretory and membrane-associated MMP-2 has been investigated. However, the real-time visualization of both MMP-2 secretion from the front edge of a migrating cell and its distribution on the cell surface has not been reported. METHODOLOGY/PRINCIPAL FINDINGS: The method of video-rate bioluminescence imaging was applied to visualize exocytosis of MMP-2 from a living cell using Gaussia luciferase (GLase as a reporter. The luminescence signals of GLase were detected by a high-speed electron-multiplying charge-coupled device (EM-CCD camera with a time resolution within 500 ms per image. The fusion protein of MMP-2 to GLase was expressed in a HeLa cell and exocytosis of MMP-2 was detected in a few seconds along the leading edge of a migrating HeLa cell. The membrane-associated MMP-2 was observed at specific sites on the bottom side of the cells, suggesting that the sites of MMP-2 secretion are different from those of MMP-2 binding. CONCLUSIONS: We were the first to successfully demonstrate the secretory dynamics of MMP-2 and the specific sites for polarized distribution of MMP-2 on the cell surface. Video-rate bioluminescence imaging using GLase is a useful method to investigate the distribution and dynamics of secreted proteins on the whole surface of polarized cells in real time.

  15. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    Science.gov (United States)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

    A traffic video image is a kind of dynamic image: its background and foreground change over time, which gives rise to occlusion. In this case, general methods have difficulty producing an accurate image segmentation. A segmentation algorithm based on Bayesian inference and a Spatio-Temporal Markov Random Field (ST-MRF) is put forward. It builds energy function models of the observation field and of the label field for a motion image sequence with the Markov property. Then, according to Bayes' rule, it uses the interaction of the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood, to obtain the maximum a posteriori estimate of the label field's parameters, and applies the ICM algorithm to extract the moving object, completing the segmentation process. Finally, the segmentation results of plain ST-MRF and of Bayesian inference combined with ST-MRF were analyzed. Experimental results show that the segmentation time of the Bayesian ST-MRF algorithm is shorter than that of ST-MRF alone and its computing workload is small; in heavy-traffic dynamic scenes, in particular, the method also achieves a better segmentation effect.
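    A minimal version of the ICM step mentioned above, for a binary label field with a Gaussian-style likelihood energy (observation field) and a Potts smoothness prior (label field), might look like the sketch below. The energy terms are simplified stand-ins, not the paper's exact model:

```python
import numpy as np

def icm_segment(image, mu, beta=1.0, iters=5):
    """Minimal ICM for binary foreground/background labelling.

    image: 2-D float array (the observation field).
    mu:    (mu_bg, mu_fg) class means for the likelihood term.
    beta:  strength of the Potts smoothness prior on 4-neighbours.
    """
    # initialise labels by nearest class mean
    labels = (np.abs(image - mu[1]) < np.abs(image - mu[0])).astype(int)
    h, w = image.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for k in (0, 1):
                    e = (image[y, x] - mu[k]) ** 2      # likelihood energy
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            e += beta * (labels[ny, nx] != k)  # prior energy
                    if e < best_e:
                        best, best_e = k, e
                labels[y, x] = best                     # greedy local update
    return labels
```

    Each sweep greedily minimizes the posterior energy pixel by pixel, which is exactly ICM's local-update strategy; a spatio-temporal MRF would add neighbours from adjacent frames to the prior term.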

  16. Cryptanalysis of a spatiotemporal chaotic image/video cryptosystem and its improved version

    Energy Technology Data Exchange (ETDEWEB)

    Ge Xin, E-mail: gexiner@gmail.co [Zhengzhou Information Science and Technology Institute, Zhengzhou 450002, Henan (China); Liu Fenlin; Lu Bin; Wang Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou 450002, Henan (China)

    2011-01-31

    Recently, a spatiotemporal chaotic image/video cryptosystem was proposed by Lian. Shortly after its publication, Rhouma et al. proposed two attacks on the cryptosystem. They also introduced an improved cryptosystem that is more secure against these attacks (R. Rhouma, S. Belghith, Phys. Lett. A 372 (2008) 5790). This Letter re-examines the security of Lian's cryptosystem and its improved version, by showing that not all details of the ciphered image of Lian's cryptosystem can be recovered by Rhouma et al.'s attacks, owing to the incorrectly recovered part of the sign-bits of the AC coefficients when an inappropriate image is chosen. Accordingly, modifications of Rhouma et al.'s attacks are proposed in order to recover the ciphered image of Lian's cryptosystem completely; based on these modifications, two new attacks are then proposed to break the improved version of Lian's cryptosystem. Finally, experimental results illustrate the validity of our analysis.

  17. Approximate Circuits in Low-Power Image and Video Processing: The Approximate Median Filter

    Directory of Open Access Journals (Sweden)

    L. Sekanina

    2017-09-01

    Full Text Available Low-power image and video processing circuits are crucial in many applications of computer vision. Traditional techniques used to reduce power consumption in these applications have recently been accompanied by circuit approximation methods, which exploit the fact that these applications are highly error resilient and, hence, the quality of image processing can be traded for power consumption. On the basis of a literature survey, we identified the components whose implementations are the most frequently approximated and the methods used for obtaining these approximations. One of these components is the median image filter. We propose, evaluate and compare two approximation strategies based on Cartesian genetic programming, applied to approximate various common implementations of the median filter. For filters developed using these approximation strategies, trade-offs between the quality of filtering and power consumption are investigated. Under the conditions of our experiments, we conclude that better trade-offs are achieved when the image filter is evolved from scratch rather than by approximating a conventional filter.
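    For reference, an exact 3x3 median filter and one classic cheap approximation (the pseudo-median, i.e. the median of row medians) can be sketched as below. This illustrates the accuracy/cost trade-off only in spirit, since the paper's approximations are evolved by Cartesian genetic programming rather than hand-designed:

```python
import numpy as np

def median3x3(img):
    """Exact 3x3 median filter (border pixels left unfiltered)."""
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

def pseudo_median3x3(img):
    """Cheaper approximation: the median of the three row medians.

    Needs fewer comparisons than a full 9-element median network,
    at the cost of occasionally differing from the exact median.
    """
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.median(np.median(window, axis=1))
    return out
```

    Both variants remove isolated impulse noise; hardware versions of such filters are built from comparator networks, and it is the size of that network that approximation shrinks.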

  18. Higher-order singular value decomposition-based discrete fractional random transform for simultaneous compression and encryption of video images

    Science.gov (United States)

    Wang, Qingzhu; Chen, Xiaoming; Zhu, Yihai

    2017-09-01

    Existing image compression and encryption methods have several shortcomings: they have low reconstruction accuracy and are unsuitable for three-dimensional (3D) images. To overcome these limitations, this paper proposes a tensor-based approach adopting tensor compressive sensing and tensor discrete fractional random transform (TDFRT). The source video images are measured by three key-controlled sensing matrices. Subsequently, the resulting tensor image is further encrypted using 3D cat map and the proposed TDFRT, which is based on higher-order singular value decomposition. A multiway projection algorithm is designed to reconstruct the video images. The proposed algorithm can greatly reduce the data volume and improve the efficiency of the data transmission and key distribution. The simulation results validate the good compression performance, efficiency, and security of the proposed algorithm.
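    The mode-wise measurement step described above ("measured by three key-controlled sensing matrices") can be illustrated with mode-n tensor products. The sketch below substitutes plain random matrices for the key-controlled ones and omits the encryption and reconstruction stages entirely:

```python
import numpy as np

def mode_n_product(T, M, n):
    """Multiply tensor T by matrix M along mode n (Tucker mode product)."""
    Tn = np.moveaxis(T, n, 0)                    # bring mode n to the front
    front = Tn.shape[0]
    Y = M @ Tn.reshape(front, -1)                # matricize, multiply
    Y = Y.reshape(M.shape[0], *Tn.shape[1:])
    return np.moveaxis(Y, 0, n)                  # restore axis order

def measure(T, mats):
    """Compress a 3-D image tensor with one sensing matrix per mode."""
    Y = T
    for n, M in enumerate(mats):
        Y = mode_n_product(Y, M, n)
    return Y
```

    Applying a short-and-wide matrix along each mode shrinks every dimension of the tensor, which is how the measurement stage reduces data volume before transmission.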

  19. Comparison Of Processing Time Of Different Size Of Images And Video Resolutions For Object Detection Using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Yogesh Yadav

    2017-01-01

    Full Text Available Object detection with small computation cost and processing time is a necessity in diverse domains such as traffic analysis, security cameras, video surveillance, etc. With current advances in technology and the decreasing prices of image sensors and video cameras, captured images have resolutions above 1 MP and higher frame rates. This implies a considerable data size that needs to be processed in a very short period of time when real-time operations and data processing are needed. Real-time video processing with high performance can be achieved with GPU technology. The aim of this study is to evaluate the influence of different image and video resolutions on the processing time, the number of object detections and the accuracy of the detected objects. The MOG2 algorithm is used for processing video input data with the GPU module. A fuzzy inference system is used to evaluate the accuracy of the number of detected objects and to show the difference between CPU and GPU computing methods.

  20. Video-mosaicking of in vivo reflectance confocal microscopy images for noninvasive examination of skin lesion (Conference Presentation)

    Science.gov (United States)

    Kose, Kivanc; Gou, Mengran; Yelamos, Oriol; Cordova, Miguel A.; Rossi, Anthony; Nehal, Kishwer S.; Camps, Octavia I.; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind

    2017-02-01

    In this report we describe a computer vision based pipeline to convert in-vivo reflectance confocal microscopy (RCM) videos collected with a handheld system into large field of view (FOV) mosaics. For many applications such as imaging of hard to access lesions, intraoperative assessment of Mohs margins, or delineation of lesion margins beyond clinical borders, raster scan based mosaicing techniques have clinically significant limitations. In such cases, clinicians often capture RCM videos by freely moving a handheld microscope over the area of interest, but the resulting videos lose large-scale spatial relationships. Videomosaicking is a standard computational imaging technique to register and stitch together consecutive frames of videos into large FOV, high resolution mosaics. However, mosaicing RCM videos collected in-vivo has unique challenges: (i) tissue may deform or warp due to physical contact with the microscope objective lens, (ii) discontinuities or "jumps" between consecutive images and motion blur artifacts may occur due to manual operation of the microscope, and (iii) optical sectioning and resolution may vary between consecutive images due to scattering and aberrations induced by changes in imaging depth and tissue morphology. We addressed these challenges by adapting or developing new algorithmic methods for videomosaicking, specifically by modeling non-rigid deformations, followed by automatically detecting discontinuities (cut locations) and, finally, applying a data-driven image stitching approach that fully preserves resolution and tissue morphologic detail without imposing arbitrary pre-defined boundaries. We will present example mosaics obtained by clinical imaging of both melanoma and non-melanoma skin cancers. The ability to combine freehand mosaicing for handheld microscopes with preserved cellular resolution will have high-impact applications in diverse clinical settings, including low-resource healthcare systems.
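    One standard building block of videomosaicking, estimating the frame-to-frame translation by phase correlation, can be sketched as follows. The paper's pipeline additionally models non-rigid deformation and cut detection, which this toy example does not attempt:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) shift of frame b relative to frame a.

    a, b: 2-D grayscale frames of equal shape.
    """
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.abs(np.fft.ifft2(cross))        # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the circular peak position into signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

    Chaining the estimated shifts across consecutive frames places each frame in a common mosaic coordinate system; a sharp, isolated correlation peak also serves as a crude confidence signal for detecting the "jumps" mentioned above.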

  1. Evaluation of a new semiautomated external defibrillator technology: a live cases video recording study.

    Science.gov (United States)

    Maes, Frédéric; Marchandise, Sébastien; Boileau, Laurianne; Le Polain de Waroux, Jean-Benoît; Scavée, Christophe

    2015-06-01

    To determine the effect of a new automated external defibrillator (AED) system connected by General Packet Radio Service (GPRS) to an external call centre in assisting novices in a sudden cardiac arrest situation. Prospective, interventional study. Layperson volunteers were first asked to complete a survey about their knowledge and ability to give cardiopulmonary resuscitation (CPR) and use an AED. A simulated cardiac arrest scenario using a CPR manikin was then presented to volunteers. A telephone and a semiautomated AED were available in the same room. The AED was linked to a call centre, which provided real-time information to 'bystanders' and emergency services via GPRS/GPS technology. The scene was videotaped to avoid any interaction with examiners. A standardised check list was used to record correct actions. 85 volunteers completed questionnaires and were recorded. Mean age was 44±16, and 49% were male; 38 (45%) had prior CPR training or felt comfortable intervening to help a sudden cardiac arrest victim; 40% felt they could deliver a shock using an AED. During the scenarios, 56 (66%) of the participants used the AED and 53 (62%) successfully delivered an electrical shock. Mean time to defibrillation was 2 min 29 s. Only 24 (28%) participants dialled the correct emergency response number (112); the live-assisted GPRS AED allowed emergency services to be alerted in 38 other cases. CPR was initiated in 63 (74%) cases, 26 (31%) times without prompting and 37 (44%) times after prompting by the AED. Although knowledge of the general population appears to be inadequate with regard to AED locations and recognition, live-assisted devices with GPS-location may improve emergency care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  2. Image and video based remote target localization and tracking on smartphones

    Science.gov (United States)

    Wang, Qia; Lobzhanidze, Alex; Jang, Hyun; Zeng, Wenjun; Shang, Yi; Yang, Jingyu

    2012-06-01

    Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, because of their powerful sensing and computing capabilities. In this paper, we describe a novel and accurate image- and video-based remote target localization and tracking system using Android smartphones, leveraging their built-in sensors such as the camera, digital compass, GPS, etc. Even though many other distance estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, the smartphone's user-friendly interface has been effectively taken advantage of by our system to facilitate low complexity and high accuracy. Our experimental results show that our system works accurately and efficiently.
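    A simple geometric core of compass-plus-GPS target localization is the intersection of two bearing rays taken from known observer positions. This sketch works in a flat local east/north frame and is illustrative only (function names hypothetical); it is not the authors' system:

```python
import math

def locate_by_bearings(p1, brg1, p2, brg2):
    """Intersect two bearing rays to fix a target position.

    p1, p2:     observer positions in a local (east, north) frame, metres.
    brg1, brg2: compass bearings to the target, degrees clockwise from north.
    """
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # solve p1 + t1*d1 = p2 + t2*d2 for t1 using 2-D cross products
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * d2[1] - ry * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

    In practice the observer positions would come from GPS (converted to a local frame) and the bearings from the digital compass plus the target's pixel offset in the camera image.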

  3. [Sexuality and the human body: the subject's view through video images].

    Science.gov (United States)

    Vargas, E; Siqueira, V H

    1999-11-01

    This study analyzes images of the body linked to sexual and reproductive behavior found in the communication processes mediated by so-called educational videos. In the relationship between subject and technology, the paper is intended to characterize the discourses and the view or perspective currently shaping health education practices. Focusing on the potential in the relationship between the enunciator and subjects represented in the text and the interaction between health professionals and messages, the study attempts to characterize the discourses and questions providing the basis for a given view of the body and sexuality. The study was conducted in the years 1996-1997 and focused on health professionals from the public health system. The results show a concept of sexuality that tends to generalize the meaning ascribed to sexual experience, ignoring the various ways by which different culturally defined groups attribute meaning to the body.

  4. Seizure semiology reflects spread from frontal to temporal lobe: evolution of hyperkinetic to automotor seizures as documented by invasive EEG video recordings.

    Science.gov (United States)

    Tezer, Fadime Irsel; Agan, Kadriye; Borggraefe, Ingo; Noachtar, Soheyl

    2013-09-01

    This patient report demonstrates the importance of seizure evolution in the localising value of seizure semiology. Spread of epileptic activity from frontal to temporal lobe, as demonstrated by invasive recordings, was reflected by change from hyperkinetic movements to arrest of activity with mild oral and manual automatisms. [Published with video sequences].

  5. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result, the best-performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high-level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high-frame-rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  6. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma

    Energy Technology Data Exchange (ETDEWEB)

    Sano, Ryuichi; Iwama, Naofumi [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Peterson, Byron J.; Kobayashi, Masahiro; Mukai, Kiyofumi [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); SOKENDAI (The Graduate University for Advanced Studies), Hayama, Kanagawa 240-0193 (Japan); Teranishi, Masaru [Hiroshima Institute of Technology, 2-1-1, Miyake, Saeki-ku, Hiroshima 731-5193 (Japan); Pandya, Shwetang N. [Institute of Plasma Research, Near Indira Bridge, Bhat Village, Gandhinagar, Gujarat 382428 (India)

    2016-05-15

    A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using the Tikhonov regularization with the criterion of minimum generalized cross validation, which is the standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is available for the 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.
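    The Tikhonov inversion with a generalized cross-validation (GCV) criterion mentioned above can be sketched in its plain dense-matrix form. Real bolometer tomography uses far larger, sparse geometry matrices and iterative solvers, so this is a minimal illustration of the principle only:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 via regularized normal equations.

    A: (m, n) geometry matrix mapping emissivity x to detector signals b.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def gcv(A, b, lam):
    """Generalized cross-validation score; minimize over lam to pick
    the regularization strength, as in the criterion named above."""
    m, n = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)  # influence matrix
    resid = b - H @ b
    return m * (resid @ resid) / (m - np.trace(H)) ** 2
```

    Scanning `gcv` over a grid of `lam` values and keeping the minimizer gives the minimum-GCV reconstruction without needing the noise level in advance.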

  7. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution – an application in higher education

    NARCIS (Netherlands)

    Jan Kuijten; Ajda Ortac; Hans Maier; Gert de Heer

    2015-01-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels).

  8. Enhancement Of Penetrant-Inspection Images

    Science.gov (United States)

    Wilson, Rhonda C.

    1990-01-01

    Proposed computerized video system processes images of fluorescent dyes absorbed in flaws in welds. Video camera, held by operator or by remote manipulator, views weld illuminated by visible white and ultraviolet light. Images of penetrating dye in cracks and voids in weld joint appear on video monitor. Fluorescent features enhanced by software to facilitate identification of true flaws and record important data.

  9. Image Segmentation and Feature Extraction for Recognizing Strokes in Tennis Game Videos

    NARCIS (Netherlands)

    Zivkovic, Z.; van der Heijden, Ferdinand; Petkovic, M.; Jonker, Willem; Langendijk, R.L.; Heijnsdijk, J.W.J.; Pimentel, A.D.; Wilkinson, M.H.F.

    This paper addresses the problem of recognizing human actions from video. Particularly, the case of recognizing events in tennis game videos is analyzed. Driven by our domain knowledge, a robust player segmentation algorithm is developed for real video data. Further, we introduce a number of novel

  10. Diurnal variations in tear film break-up time determined in healthy subjects by software-assisted interpretation of tear film video recordings.

    Science.gov (United States)

    Pena-Verdeal, Hugo; García-Resúa, Carlos; Ramos, Lucía; Yebra-Pimentel, Eva; Giráldez, Ma Jesús

    2016-03-01

    This study was designed to examine diurnal variations in tear film break-up time (BUT) and maximum blink interval (MBI) and to assess two different ways of calculating these variables on video recordings of the BUT test interpreted with the help of specially designed software. The repeatability of interpreting BUT video recordings was also addressed. Twenty-six healthy young adults were enrolled after ruling out dry eye according to a battery of tests (ocular surface disease index, McMonnies questionnaire, Schirmer test, phenol red test and corneal staining). BUT and MBI were determined on video recordings of the BUT test conducted over a day in four sessions (9.30 am, 12.30 pm, 3.30 pm and 6.30 pm). In each session, the test was repeated three times to give three videos in which three BUT and MBI values were obtained by a masked observer. BUT and MBI were determined by averaging the three measurements and by averaging only the two closest measurements. Finally, two further experienced observers re-examined the videos to assess the repeatability of the BUT measurements made. No diurnal variation in BUT was observed regardless of whether three or two video measurements were averaged. Significant correlation was detected between BUT and MBI. Inter-observer repeatability was better when BUT times were no longer than 15 seconds. Tear film BUT was not influenced by the time of day, and moderate to strong correlation with MBI was observed in all four sessions. The software-assisted method proved useful and identified the need to clarify the BUT end-point and to limit the test to 15 seconds to improve observer repeatability. © 2016 Optometry Australia.
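    The two summarisation schemes described above (mean of all three repeated measurements versus mean of only the two closest) differ in a single step. A minimal sketch of the two-closest rule, with a hypothetical function name:

```python
def closest_pair_mean(m1, m2, m3):
    """Average the two closest of three repeated measurements,
    discarding the outlying one, as in the second scheme above."""
    pairs = [(abs(m1 - m2), (m1, m2)),
             (abs(m1 - m3), (m1, m3)),
             (abs(m2 - m3), (m2, m3))]
    _, (a, b) = min(pairs, key=lambda p: p[0])  # pick the tightest pair
    return (a + b) / 2.0
```

    For three BUT readings of, say, 5.0 s, 5.2 s and 9.0 s, this rule keeps the tight pair and reports 5.1 s rather than letting the 9.0 s outlier pull the mean upward.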

  11. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    Science.gov (United States)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  12. Surgical tool detection in cataract surgery videos through multi-image fusion inside a convolutional neural network.

    Science.gov (United States)

    Al Hajj, Hassan; Lamard, Mathieu; Charriere, Katia; Cochener, Beatrice; Quellec, Gwenole

    2017-07-01

    The automatic detection of surgical tools in surgery videos is a promising solution for surgical workflow analysis. It paves the way to various applications, including surgical workflow optimization, surgical skill evaluation and real-time warning generation. A solution based on convolutional neural networks (CNNs) is proposed in this paper. Unlike existing solutions, the proposed CNN does not analyze images independently; instead, it analyzes sequences of consecutive images. Features extracted from each image by the CNN are fused inside the network using the optical flow. For improved performance, this multi-image fusion strategy is also applied while training the CNN. The proposed framework was evaluated on a dataset of 30 cataract surgery videos (6 hours of videos). Ten tool categories were defined by surgeons. The proposed system was able to detect each of these categories with a high area under the ROC curve (0.953 ≤ Az ≤ 0.987). The proposed detector, based on multi-image fusion, was significantly more sensitive and specific than a similar system analyzing images independently (p = 2.98 × 10(-6) and p = 2.07 × 10(-3), respectively).

  13. Multi-hypothesis tracking of the tongue surface in ultrasound video recordings of normal and impaired speech.

    Science.gov (United States)

    Laporte, Catherine; Ménard, Lucie

    2018-02-01

Characterizing tongue shape and motion, as they appear in real-time ultrasound (US) images, is of interest to the study of healthy and impaired speech production. Quantitative analysis of tongue shape and motion requires that the tongue surface be extracted in each frame of US speech recordings. While the literature proposes several automated methods for this purpose, these either require large or very well matched training sets, or lack robustness in the presence of rapid tongue motion. This paper presents a new robust method for tongue tracking in US images that combines simple tongue shape and motion models, derived from a small training data set, with a highly flexible active contour (snake) representation, and maintains multiple hypotheses as to the correct tongue contour via a particle filtering algorithm. The method was tested on a large database of free speech recordings from healthy and impaired speakers, and its accuracy was measured against the manual segmentations obtained for every image in the database. The proposed method achieved mean sum of distances errors of 1.69 ± 1.10 mm, and its accuracy was not highly sensitive to training set composition. Furthermore, the proposed method showed improved accuracy, both in terms of mean sum of distances error and in terms of linguistically meaningful shape indices, compared to the three publicly available tongue tracking software packages Edgetrak, TongueTrack and Autotrace. Copyright © 2017 Elsevier B.V. All rights reserved.
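The multi-hypothesis idea rests on particle filtering; a minimal sketch with a scalar state (a drastic simplification of the paper's snake-contour particles) shows the predict/weight/resample cycle:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, motion_std=0.5, obs_std=1.0):
    """Minimal particle filter over a scalar state; each particle is one
    hypothesis, analogous to the contour hypotheses maintained in the paper."""
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, motion_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)               # weight
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        estimates.append(particles.mean())
    return np.array(estimates)

true_track = np.cumsum(rng.normal(0.0, 0.3, 50))   # synthetic slowly drifting motion
obs = true_track + rng.normal(0.0, 1.0, 50)        # noisy measurements of it
est = particle_filter(obs)
print(est.shape)  # (50,)
```

In the paper each particle would be a whole snake contour scored against image intensities, rather than a scalar scored against a noisy measurement.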

  14. Slit-lamp management in contact lenses laboratory classes: learning upgrade with monitor visualization of webcam video recordings

    Science.gov (United States)

    Arines, Justo; Gargallo, Ana

    2014-07-01

The training in the use of the slit lamp has always been difficult for students of the degree in Optics and Optometry. Instruments with associated cameras help a great deal in this task: they allow teachers to observe and check whether students evaluate eye health appropriately, to correct errors of use, and to show students how to proceed with a visual demonstration. However, these devices are more expensive than those without an integrated camera connected to a display unit. With the aim of improving students' skills in the management of the slit lamp, we have adapted USB HD webcams (Microsoft Lifecam HD-5000) to the objectives of the slit lamps available in our contact lens laboratory room. The webcams are connected to a PC running Linux Ubuntu 11.0, so the setup is a low-cost device. Our experience shows that this simple method has several advantages. It allows us to take good-quality pictures of different eye-health conditions, and we can record videos of eye evaluations and give demonstrations of the instrument. It also increases interaction between students, because they can see what their colleagues are doing, become aware of mistakes, and help and correct each other. It is a useful tool in the practical exam too. We think that the method supports training in optometric practice and increases the students' confidence without a large outlay.

  15. A Review on Video/Image Authentication and Tamper Detection Techniques

    Science.gov (United States)

    Parmar, Zarna; Upadhyay, Saurabh

    2013-02-01

With the innovations and development in sophisticated video editing technology and the wide spread of video information and services in our society, it is becoming increasingly important to assure the trustworthiness of video information. Therefore, in surveillance, medical and various other fields, video contents must be protected against attempts to manipulate them. Such malicious alterations could affect the decisions based on these videos. Many techniques that assure the authenticity of video information, each in its own way, have been proposed in the literature. In this paper we present a brief survey of video authentication techniques along with their classification. These authentication techniques are generally classified into the following categories: digital signature based techniques, watermark based techniques, and other authentication techniques.

  16. CREATION OF 3D MODELS FROM LARGE UNSTRUCTURED IMAGE AND VIDEO DATASETS

    Directory of Open Access Journals (Sweden)

    J. Hollick

    2013-05-01

Full Text Available Exploration of various places using low-cost camera solutions over decades without having a photogrammetric application in mind has resulted in large collections of images and videos that may have significant cultural value. The purpose of collecting this data is often to provide a log of events and therefore the data is often unstructured and of varying quality. Depending on the equipment used there may be approximate location data available for the images but the accuracy of this data may also be of varying quality. In this paper we present an approach that can deal with these conditions and process datasets of this type to produce 3D models. Results from processing the dataset collected during the discovery and subsequent exploration of the HMAS Sydney and HSK Kormoran wreck sites show the potential of our approach. The results are promising and show that there is potential to retrieve significantly more information from many of these datasets than previously thought possible.

  17. Jointly optimized spatial prediction and block transform for video and image coding.

    Science.gov (United States)

    Han, Jingning; Saxena, Ankur; Melkote, Vinay; Rose, Kenneth

    2012-04-01

This paper proposes a novel approach to jointly optimize spatial prediction and the choice of the subsequent transform in video and image compression. Under the assumption of a separable first-order Gauss-Markov model for the image signal, it is shown that the optimal Karhunen-Loeve Transform, given available partial boundary information, is well approximated by a close relative of the discrete sine transform (DST), with basis vectors that tend to vanish at the known boundary and maximize energy at the unknown boundary. The overall intraframe coding scheme thus switches between this variant of the DST, named asymmetric DST (ADST), and the traditional discrete cosine transform (DCT), depending on prediction direction and boundary information. The ADST is first compared with the DCT in terms of coding gain under ideal model conditions and is demonstrated to provide significantly improved compression efficiency. The proposed adaptive prediction and transform scheme is then implemented within the H.264/AVC intra-mode framework and is experimentally shown to significantly outperform the standard intra coding mode. As an added benefit, it achieves a substantial reduction in blocking artifacts due to the fact that the transform now adapts to the statistics of block edges. An integer version of this ADST is also proposed.
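The ADST is commonly identified with the DST-VII; a sketch constructing it next to an orthonormal DCT-II (standard normalizations, assumed here rather than taken from the paper) shows the boundary behavior described above:

```python
import numpy as np

def adst_matrix(n):
    """DST-VII basis ('ADST'): rows vanish toward the known (predicted)
    boundary and peak at the unknown boundary."""
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # sample index
    return 2 / np.sqrt(2 * n + 1) * np.sin(np.pi * (2 * k + 1) * (i + 1) / (2 * n + 1))

def dct_matrix(n):
    """Orthonormal DCT-II basis for comparison."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)
    return m

A, C = adst_matrix(8), dct_matrix(8)
print(np.allclose(A @ A.T, np.eye(8)))  # True: the ADST basis is orthonormal
print(np.allclose(C @ C.T, np.eye(8)))  # True: the DCT basis is orthonormal
print(A[0])  # first basis vector grows monotonically away from the known boundary
```

The first ADST basis vector is nearly zero at sample 0 (where the predictor is accurate) and largest at the far end (where prediction residual energy concentrates), which is the intuition behind the switch.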

  18. Significance of telemedicine for video image transmission of endoscopic retrograde cholangiopancreatography and endoscopic ultrasonography procedures.

    Science.gov (United States)

    Shimizu, Shuji; Itaba, Soichi; Yada, Shinichiro; Takahata, Shunichi; Nakashima, Naoki; Okamura, Koji; Rerknimitr, Rungsun; Akaraviputh, Thawatchai; Lu, Xinghua; Tanaka, Masao

    2011-05-01

    With the rapid and marked progress in gastrointestinal endoscopy, the education of doctors in many new diagnostic and therapeutic procedures is of increasing importance. Telecommunications (telemedicine) is very useful and cost-effective for doctors' continuing exposure to advanced skills, including those needed for hepato-pancreato-biliary diseases. Nevertheless, telemedicine in endoscopy has not yet gained much popularity. We have successfully established a new system which solves the problems of conventional ones, namely poor streaming images and the need for special expensive teleconferencing equipment. The digital video transport system, free software that transforms digital video signals directly into Internet Protocol without any analog conversion, was installed on a personal computer using a network with as much as 30 Mbps per channel, thereby providing more than 200 times greater information volume than the conventional system. Kyushu University Hospital in Japan was linked internationally to worldwide academic networks, using security software to protect patients' privacy. Of the 188 telecommunications link-ups involving 108 institutions in 23 countries performed between February 2003 and August 2009, 55 events were endoscopy-related, 19 were live demonstrations, and 36 were gastrointestinal teleconferences with interactive discussions. The frame rate of the transmitted pictures was 30/s, thus preserving smooth high-quality streaming. This paper documents the first time that an advanced tele-endoscopy system has been established over such a wide area using academic high-volume networks, funded by the various governments, and which is now available all over the world. The benefits of a network dedicated to research and education have barely been recognized in the medical community. 
We believe our cutting-edge system will be a milestone in endoscopy and will improve the quality of gastrointestinal education, especially with respect to endoscopic retrograde

  19. 13 point video tape quality guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

Until high-definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher-quality videos. No video is perfect, so don't expect to abide by every guideline every time.

  20. Wearable Brain Imaging with Multi-Modal Physiological Recording.

    Science.gov (United States)

    Strangman, Gary E; Ivkovic, Vladimir; Zhang, Quan

    2017-07-13

The brain is a central component of cognitive and physical human performance. Measures including functional brain activation, cerebral perfusion, cerebral oxygenation, evoked electrical responses, and resting hemodynamic and electrical activity are all related to, or can predict, health status or performance decrements. However, measuring brain physiology typically requires large, stationary machines that are not suitable for mobile or self-monitoring use. Moreover, when individuals are ambulatory, systemic physiological fluctuations (e.g., in heart rate, blood pressure, skin perfusion and more) can interfere with non-invasive brain measurements. In efforts to address the physiological monitoring and performance assessment needs of astronauts during spaceflight, we have developed easy-to-use, wearable prototypes (NINscan, for near-infrared scanning) that can collect synchronized multi-modal physiology data, including hemodynamic deep-tissue imaging (including brain and muscles), electroencephalography, electrocardiography, electromyography, electrooculography, accelerometry, gyroscopy, pressure, respiration and temperature measurements. Given their self-contained and portable nature, these devices can be deployed in a much broader range of settings, including austere environments, thereby enabling a wider range of novel medical and research physiology applications. We review these, including high-altitude assessments, self-deployable multi-modal (e.g., polysomnographic) recordings in remote or low-resource environments, fluid shifts in variable-gravity or spaceflight-analog environments, intra-cranial brain motion during high-impact sports, and long-duration monitoring for clinical symptom capture in various clinical conditions. In addition to further enhancing sensitivity and miniaturization, advanced computational algorithms could help support real-time feedback and alerts regarding performance and health. Copyright © 2017, Journal of Applied Physiology.

  1. Endoscopic trimodal imaging detects colonic neoplasia as well as standard video endoscopy.

    Science.gov (United States)

    Kuiper, Teaco; van den Broek, Frank J C; Naber, Anton H; van Soest, Ellert J; Scholten, Pieter; Mallant-Hent, Rosalie Ch; van den Brande, Jan; Jansen, Jeroen M; van Oijen, Arnoud H A M; Marsman, Willem A; Bergman, Jacques J G H M; Fockens, Paul; Dekker, Evelien

    2011-06-01

    Endoscopic trimodal imaging (ETMI) is a novel endoscopic technique that combines high-resolution endoscopy (HRE), autofluorescence imaging (AFI), and narrow-band imaging (NBI) that has only been studied in academic settings. We performed a randomized, controlled trial in a nonacademic setting to compare ETMI with standard video endoscopy (SVE) in the detection and differentiation of colorectal lesions. The study included 234 patients scheduled to receive colonoscopy who were randomly assigned to undergo a colonoscopy in tandem with either ETMI or SVE. In the ETMI group (n=118), first examination was performed using HRE, followed by AFI. In the other group, both examinations were performed using SVE (n=116). In the ETMI group, detected lesions were differentiated using AFI and NBI. In the ETMI group, 87 adenomas were detected in the first examination (with HRE), and then 34 adenomas were detected during second inspection (with AFI). In the SVE group, 79 adenomas were detected during the first inspection, and then 33 adenomas were detected during the second inspection. Adenoma detection rates did not differ significantly between the 2 groups (ETMI: 1.03 vs SVE: 0.97, P=.360). The adenoma miss-rate was 29% for HRE and 28% for SVE. The sensitivity, specificity, and accuracy of NBI in differentiating adenomas from nonadenomatous lesions were 87%, 63%, and 75%, respectively; corresponding values for AFI were 90%, 37%, and 62%, respectively. In a nonacademic setting, ETMI did not improve the detection rate for adenomas compared with SVE. NBI and AFI each differentiated colonic lesions with high levels of sensitivity but low levels of specificity. Copyright © 2011 AGA Institute. Published by Elsevier Inc. All rights reserved.
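The reported sensitivity, specificity, and accuracy follow the standard confusion-matrix definitions; the sketch below uses hypothetical counts chosen so that the NBI percentages above fall out (these are not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix definitions of sensitivity, specificity, accuracy."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# hypothetical counts reproducing the NBI percentages reported above
sens, spec, acc = diagnostic_metrics(tp=87, fp=37, tn=63, fn=13)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.87 0.63 0.75
```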

  2. Real-time video imaging of gas plumes using a DMD-enabled full-frame programmable spectral filter

    Science.gov (United States)

    Graff, David L.; Love, Steven P.

    2016-02-01

    Programmable spectral filters based on digital micromirror devices (DMDs) are typically restricted to imaging a 1D line across a scene, analogous to conventional "push-broom scanning" hyperspectral imagers. In previous work, however, we demonstrated that, by placing the diffraction grating at a telecentric image plane rather than at the more conventional location in collimated space, a spectral plane can be created at which light from the entire 2D scene focuses to a unique location for each wavelength. A DMD placed at this spectral plane can then spectrally manipulate an entire 2D image at once, enabling programmable matched filters to be applied to real-time video imaging. We have adapted this concept to imaging rapidly evolving gas plumes. We have constructed a high spectral resolution programmable spectral imager operating in the shortwave infrared region, capable of resolving the rotational-vibrational line structure of several gases at sub-nm spectral resolution. This ability to resolve the detailed gas-phase line structure enables implementation of highly selective filters that unambiguously separate the gas spectrum from background spectral clutter. On-line and between-line multi-band spectral filters, with bands individually weighted using the DMD's duty-cycle-based grayscale capability, are alternately uploaded to the DMD, the resulting images differenced, and the result displayed in real time at rates of several frames per second to produce real-time video of the turbulent motion of the gas plume.
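The on-line/between-line differencing scheme can be illustrated with a toy 1-D spectrum; the line comb, background shape, and band weights below are invented purely for illustration:

```python
import numpy as np

# Hypothetical spectrum: narrow gas lines on a smooth background (invented shapes)
wavelengths = np.linspace(0.0, 1.0, 200)
background = 1.0 + 0.2 * np.sin(2 * np.pi * wavelengths)
gas_lines = np.zeros_like(wavelengths)
gas_lines[::20] = 0.5                                  # comb of narrow lines

def apply_filter(spectrum, weights):
    """A DMD spectral filter amounts to a weighted sum over wavelength bands."""
    return np.sum(spectrum * weights)

on_line = np.zeros_like(wavelengths); on_line[::20] = 1.0    # pass the line positions
between = np.zeros_like(wavelengths); between[10::20] = 1.0  # pass the gaps

with_gas = background + gas_lines
without_gas = background

signal = apply_filter(with_gas, on_line) - apply_filter(with_gas, between)
clutter = apply_filter(without_gas, on_line) - apply_filter(without_gas, between)
print(round(signal - clutter, 6))  # 5.0: the smooth background cancels and
                                   # only the gas-line energy remains
```

Differencing the two filter outputs is what suppresses smooth spectral clutter while keeping the line-locked gas signal, which is why the two masks are alternately uploaded to the DMD.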

  3. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7 ± 0.3) pixels and a mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
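The mean re-projection error reported for the calibration is conventionally computed as below; the point coordinates and noise level are synthetic assumptions, not the study's data:

```python
import numpy as np

def mean_reprojection_error(projected_px, detected_px):
    """Mean Euclidean distance (in pixels) between reprojected model points
    and their detected image locations."""
    return np.linalg.norm(projected_px - detected_px, axis=1).mean()

# hypothetical calibration targets: detections perturbed from ideal projections
rng = np.random.default_rng(2)
ideal = rng.uniform(0.0, 1080.0, size=(50, 2))
detected = ideal + rng.normal(0.0, 0.5, size=(50, 2))
err = mean_reprojection_error(ideal, detected)
print(err < 2.0)  # True: sub-pixel detection noise yields a sub-2-pixel mean error
```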

  4. Three-dimensional video presentation of microsurgery by the cross-eyed viewing method using a high-definition video system.

    Science.gov (United States)

    Terakawa, Yuzo; Ishibashi, Kenichi; Goto, Takeo; Ohata, Kenji

    2011-01-01

Three-dimensional (3-D) video recording of microsurgery is a more promising tool for presentation and education of microsurgery than conventional two-dimensional video systems, but has not been widely adopted, partly because 3-D image processing in previous 3-D video systems is complicated and observers without optical devices cannot visualize the 3-D image. A new technical development for 3-D video presentation of microsurgery is described. Microsurgery is recorded with a microscope equipped with a single high-definition (HD) video camera. This 3-D video system records the right- and left-eye views of the microscope simultaneously as a single HD stream with the use of a 3-D camera adapter: the right- and left-eye views of the microscope are displayed separately on the right and left sides, respectively. The operation video is then edited with video editing software so that the right-eye view is displayed on the left side and the left-eye view on the right side. Consequently, a 3-D video of microsurgery can be created by viewing the edited video with the cross-eyed stereogram viewing method, without optical devices. The 3-D microsurgical video provides a more accurate view, especially with regard to depth, and a better understanding of microsurgical anatomy. Although several issues are yet to be addressed, this 3-D video system is a useful method of recording and presenting microsurgery for 3-D viewing with currently available equipment, without optical devices.
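The editing step, swapping the left and right halves of the side-by-side recording, can be sketched per frame (a toy array stands in for the HD frame):

```python
import numpy as np

def swap_sbs(frame):
    """Swap the halves of a side-by-side stereo frame so the right-eye view
    lands on the left, as required for cross-eyed free viewing."""
    w = frame.shape[1] // 2
    return np.concatenate([frame[:, w:], frame[:, :w]], axis=1)

frame = np.zeros((4, 8), dtype=np.uint8)
frame[:, :4] = 1                       # mark the left half of the toy frame
swapped = swap_sbs(frame)
print(swapped[0].tolist())  # [0, 0, 0, 0, 1, 1, 1, 1]
```

Applying the swap twice recovers the original frame, so the edit is trivially reversible.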

  5. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is tremendously growing and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material...

  6. The anxiogenic video-recorded Stroop Color-Word Test: psychological and physiological alterations and effects of diazepam.

    Science.gov (United States)

    Teixeira-Silva, Flavia; Prado, Gabriela Bordini; Ribeiro, Lídia Christine Goulart; Leite, José Roberto

    2004-09-15

    From among the few human experimental models that can be used to predict the clinical activity of new anxiolytic drugs, the video-recorded Stroop Color-Word Test (VRSCWT), which uses subjective scales to evaluate anxious states, is notable for its simplicity. However, considering that the choice of treatment for anxiety disorders is heavily dependent on the level of somatic symptomatology, a quantitative evaluation of the physiological alterations elicited by the anxiogenic situation of the VRSCWT would also be of great interest. In the present study, 36 healthy male and female volunteers were submitted to either the VRSCWT or to a nonanxiogenic test. The results showed that, as well as a sensation of anxiety, the VRSCWT elicited increases in heart rate and gastrocnemius tension. Subsequently, a further 48 healthy men and women were randomly assigned to three treatments: placebo, 5 and 10 mg of diazepam, and were submitted to the VRSCWT. The results showed that in men, diazepam blocked the feeling of anxiety elicited by the test, although it did not prevent the physiological alterations, while in women, there was no response to the anxiolytic action of the drug. Taken as a whole, these results suggest that the VRSCWT is an efficient method of inducing anxiety experimentally. It is able to elicit observable psychological and physiological alterations and can detect the blocking, by an anxiolytic, of the feelings of anxiety in healthy men. Furthermore, the results suggest that the neural pathways for the control of the psychological and physiological manifestations of anxiety may be separate. This study also draws attention to the fact that gender is an important variable in the evaluation of anxiolytic drugs.

  7. One decade of imaging precipitation measurement by 2D-video-distrometer

    Directory of Open Access Journals (Sweden)

    M. Schönhuber

    2007-01-01

Full Text Available The 2D-Video-Distrometer (2DVD) is a ground-based point-monitoring precipitation gauge. For each particle reaching the measuring area, front and side contours as well as fall velocity and a precise time stamp are recorded. Development of the 2DVD began in 1991, to clarify discrepancies found when comparing weather radar data analyses with literature models. The instrument was then manufactured in a small-scale series, and the first 2DVD delivery took place in 1996, ten years ago now. An overview of present 2DVD features is given, and it is shown how the instrument has been continuously improved over the past ten years. Scientific merits of 2DVD measurements are explained, including drop size readings without an upper limit, drop shape and orientation angle information, contours of solid and melting particles, and an independent measurement of particles' fall velocity, also in mixed-phase events. Plans for a next-generation instrument are described; through enhanced user-friendliness, this unique data type shall be opened to a wider user community.

  8. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  9. NEI YouTube Videos: Amblyopia

    Medline Plus

    Full Text Available ... questions Clinical Studies Publications Catalog Photos and Images Spanish Language Information Grants and Funding Extramural Research Division ... Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video ...

  10. Prediction of foal carcass composition and wholesale cut yields by using video image analysis.

    Science.gov (United States)

    Lorenzo, J M; Guedes, C M; Agregán, R; Sarriés, M V; Franco, D; Silva, S R

    2018-01-01

This work represents the first application of video image analysis (VIA) technology to predicting lean meat and fat composition in the equine species. Images of the left sides of the carcasses (n=42) were captured from the dorsal, lateral and medial views using a high-resolution digital camera. A total of 41 measurements (angles, lengths, widths and areas) were obtained by VIA. The variation in the percentage of lean meat obtained from the forequarter (FQ) and hindquarter (HQ) carcass ranged between 5.86% and 7.83%. However, the percentage of fat (FAT) obtained from the FQ and HQ carcass presented a higher variation (CV between 41.34% and 44.58%). By combining different measurements and using prediction models with cold carcass weight (CCW) and VIA measurements, the coefficients of determination (k-fold R²) were 0.458 and 0.532 for FQ and HQ, respectively. On the other hand, employing the most comprehensive model (CCW plus all VIA measurements), the k-fold R² increased from 0.494 to 0.887 and from 0.513 to 0.878 with respect to the simplest model (only with CCW), while precision increased with the reduction in the root mean square error (2.958 to 0.947 and 1.841 to 0.787) for the hindquarter fat and lean percentage, respectively. With CCW plus VIA measurements it is possible to explain the variation in wholesale value cut yields (k-fold R² between 0.533 and 0.889). Overall, the VIA technology applied in the present study can be considered an accurate method to assess horse carcass composition, which could have a role in breeding programmes and research studies and assist in the development of a value-based marketing system for horse carcasses.
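The k-fold R² statistic used throughout can be illustrated with a small ordinary-least-squares sketch on synthetic data; the fold scheme, predictors, and coefficients are assumptions, not the study's pipeline:

```python
import numpy as np

def kfold_r2(X, y, k=5, seed=0):
    """k-fold cross-validated R-squared for an ordinary least-squares model
    (a sketch of the statistic's style, not the study's actual pipeline)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    ss_res = ss_tot = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        A = np.c_[np.ones(len(train)), X[train]]        # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.c_[np.ones(len(fold)), X[fold]] @ coef
        ss_res += np.sum((y[fold] - pred) ** 2)
        ss_tot += np.sum((y[fold] - y[fold].mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
X = rng.normal(size=(42, 3))                            # e.g. CCW plus two VIA measures
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0.0, 0.5, 42)
print(kfold_r2(X, y) > 0.8)  # True: strong predictors give a high k-fold R²
```

Because every prediction is made on held-out animals, the k-fold R² guards against the optimism of an in-sample fit, which matters with only 42 carcasses and up to 41 candidate measurements.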

  11. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    video sequences. For the video sequences, different filters are applied to luminance (Y) and chrominance (U,V) components. The performance of the proposed method has been compared against several other methods by using different objective quality metrics and a subjective comparison study. Both objective...

  12. VideoSAR collections to image underground chemical explosion surface phenomena

    Science.gov (United States)

    Yocky, David A.; Calloway, Terry M.; Wahl, Daniel E.

    2017-05-01

Fully-polarimetric X-band (9.6 GHz center frequency) VideoSAR collections with 0.125-meter ground resolution were flown before, during, and after the fifth Source Physics Experiment (SPE-5) underground chemical explosion. We generate and exploit synthetic aperture radar (SAR) and VideoSAR products to characterize surface effects caused by the underground explosion. To our knowledge, this has never been done before. The exploited VideoSAR products are "movies" of coherence maps, phase-difference maps, and magnitude imagery. These movies show two-dimensional, time-varying surface movement. However, objects located on the SPE pad created unwanted vibrating signatures during the event, which made registration and coherent processing more difficult. Nevertheless, there is evidence that dynamic changes were captured by VideoSAR during the event. VideoSAR provides a unique, coherent, time-varying measure of the surface expression of an underground chemical explosion.
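The coherence maps behind such "movies" are conventionally estimated as windowed sample coherence between co-registered complex images; the sketch below uses synthetic data, not SPE imagery:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def coherence_map(s1, s2, win=5):
    """Sample coherence |<s1 s2*>| / sqrt(<|s1|^2><|s2|^2>) over win x win windows."""
    cross = sliding_window_view(s1 * np.conj(s2), (win, win)).sum(axis=(2, 3))
    p1 = sliding_window_view(np.abs(s1) ** 2, (win, win)).sum(axis=(2, 3))
    p2 = sliding_window_view(np.abs(s2) ** 2, (win, win)).sum(axis=(2, 3))
    return np.abs(cross) / np.sqrt(p1 * p2)

rng = np.random.default_rng(3)
s1 = rng.normal(size=(40, 40)) + 1j * rng.normal(size=(40, 40))
s2 = s1.copy()
s2[:, 20:] = rng.normal(size=(40, 20)) + 1j * rng.normal(size=(40, 20))  # disturbed half
gamma = coherence_map(s1, s2)
print(gamma[:, :10].mean() > 0.9)   # True: unchanged ground stays coherent
print(gamma[:, -10:].mean() < 0.5)  # True: disturbed ground decorrelates
```

Surface change from the explosion shows up exactly as this loss of coherence, which is why coherence "movies" localize the surface expression in both space and time.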

  13. Short term exposure to attractive and muscular singers in music video clips negatively affects men's body image and mood.

    Science.gov (United States)

    Mulgrew, K E; Volcevski-Kostas, D

    2012-09-01

    Viewing idealized images has been shown to reduce men's body satisfaction; however no research has examined the impact of music video clips. This was the first study to examine the effects of exposure to muscular images in music clips on men's body image, mood and cognitions. Ninety men viewed 5 min of clips containing scenery, muscular or average-looking singers, and completed pre- and posttest measures of mood and body image. Appearance schema activation was also measured. Men exposed to the muscular clips showed poorer posttest levels of anger, body and muscle tone satisfaction compared to men exposed to the scenery or average clips. No evidence of schema activation was found, although potential problems with the measure are noted. These preliminary findings suggest that even short term exposure to music clips can produce negative effects on men's body image and mood. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. The Effect of Rubric-Guided, Focused, Personalized Coaching Sessions and Video-Recorded Presentations on Teaching Skills Among Fourth-Year Medical Students: A Pilot Study.

    Science.gov (United States)

    Tchekmedyian, Vatche; Shields, Helen M; Pelletier, Stephen R; Pazo, Valeria C

    2017-11-01

As medical students become residents, teaching becomes an expected and integral responsibility. Yet training-for-teaching opportunities are lacking. In 2014, the authors designed a pilot study using rubric-guided, focused, personalized coaching sessions and video-recorded presentations to improve student teaching skills among fourth-year students at Harvard Medical School. In 2014-2015, the authors recruited students from an elective on how to tutor preclinical students for the pilot, which consisted of four phases: a precoaching teaching presentation, a 30- to 45-minute coaching session, a postcoaching teaching presentation, and blinded reviewer ratings. Students' pre- and postcoaching presentations were video recorded. Using a scoring rubric for 15 teaching skills, students rated their pre- and postcoaching videos. Blinded reviewers also rated the pre- and postcoaching presentations using the same rubric, with an additional category to gauge their overall impression. Fourteen students completed all four phases of the pilot. Students' ratings demonstrated statistically significant improvement in several teaching skills, including presentation content. Future directions include refining the rubric, using coaching in different teaching settings, addressing the intervention's generalizability, training coaches, and performing additional evaluations.

  15. Disembodied perspective: third-person images in GoPro videos

    National Research Council Canada - National Science Library

    Bédard, Philippe

    2015-01-01

    A technical analysis of GoPro videos, focusing on the production of a third-person perspective created when the camera is turned back on the user, and the sense of disorientation that results for the spectator...

  16. Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.

    Directory of Open Access Journals (Sweden)

    Daniel H Monson

    Full Text Available During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m2 (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06) and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.

  17. Estimating age ratios and size of Pacific walrus herds on coastal haulouts using video imaging

    Science.gov (United States)

    Monson, Daniel H.; Udevitz, Mark S.; Jay, Chadwick V.

    2013-01-01

    During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010–2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m2 (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03–0.06) and we documented ~30,000 animals along ~1 km of beach in 2011. Within the herds, dependent walruses (0–2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.

  18. Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.

    Science.gov (United States)

    Monson, Daniel H; Udevitz, Mark S; Jay, Chadwick V

    2013-01-01

    During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m2 (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06) and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.
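The density and count figures reported in these records can be sanity-checked with simple density × area arithmetic. A small sketch (the ~34 m strip depth is our own derived illustration, not a figure from the papers):

```python
# Density and count figures taken from the record; the implied occupied
# strip depth along the beach is derived here purely as an illustration.
density = 0.88            # walruses per m^2 (reported mean herd density)
count = 30_000            # animals documented along the beach in 2011
beach_length_m = 1_000.0  # ~1 km of beach

occupied_area_m2 = count / density
strip_depth_m = occupied_area_m2 / beach_length_m
print(round(occupied_area_m2), round(strip_depth_m, 1))
```

At the reported density, ~30,000 animals occupy roughly 34,000 m², i.e. a strip about 34 m deep along 1 km of beach.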

  19. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    Science.gov (United States)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Much effort is invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real-time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.
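The record does not detail Visionsense's reconstruction method; the underlying principle of recovering depth from a calibrated, rectified stereo pair is textbook triangulation, Z = f·B/d. A minimal sketch with hypothetical camera parameters:

```python
# Textbook stereo triangulation for a rectified pair: Z = f * B / d.
# Focal length and baseline below are hypothetical values, not the
# actual parameters of the Visionsense camera.
f_px = 800.0        # focal length expressed in pixels
baseline_mm = 4.0   # distance between the two miniature lenses

def depth_mm(disparity_px):
    """Depth of a point observed with the given disparity (pixels)."""
    return f_px * baseline_mm / disparity_px

print(depth_mm(40.0))  # a 40 px disparity maps to 80 mm depth
```

Nearby tissue produces large disparities and small depth values; per-pixel disparities like these are what allow overlaying CT/MRI surfaces on the live stereo video.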

  20. Ball lightning observation: an objective video-camera analysis report

    OpenAIRE

    Sello, Stefano; Viviani, Paolo; Paganini, Enrico

    2011-01-01

    In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

  1. Exchanging digital video of laryngeal examinations.

    Science.gov (United States)

    Crump, John M; Deutsch, Thomas

    2004-03-01

    Laryngeal examinations, especially stroboscopic examinations, are increasingly recorded using digital video formats on computer media, rather than using analog formats on videotape. It would be useful to share these examinations with other medical professionals in formats that would facilitate reliable and high-quality playback on a personal computer by the recipients. Unfortunately, a personal computer is not well designed for reliable presentation of artifact-free video. It is particularly important that laryngeal video play without artifacts of motion or color because these are often the characteristics of greatest clinical interest. With proper tools and procedures, and with reasonable compromises in image resolution and the duration of the examination, digital video of laryngeal examinations can be reliably exchanged. However, the tools, procedures, and formats for recording, converting to another digital format ("transcoding"), communicating, copying, and playing digital video with a personal computer are not familiar to most medical professionals. Some understanding of digital video and the tools available is required of those wanting to exchange digital video. Best results are achieved by recording to a digital format best suited for recording (such as MJPEG or DV), judiciously selecting a segment of the recording for sharing, and converting to a format suited to distribution (such as MPEG1 or MPEG2) using a medium suited to the situation (such as e-mail attachment, CD-ROM, a "clip" within a Microsoft PowerPoint presentation, or DVD-Video). If digital video is sent to a colleague, some guidance on playing files and using a PC media player is helpful.
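The workflow this record recommends - select a clinically relevant segment, then transcode it to a distribution format such as MPEG-2 - maps directly onto modern command-line tools. A sketch using ffmpeg (filenames and timestamps are hypothetical; the command is only constructed here, not executed):

```python
# Illustrative ffmpeg invocation for the segment-and-transcode workflow
# described in the record. All filenames and times are hypothetical.
cmd = [
    "ffmpeg",
    "-ss", "00:01:05",             # start of the selected segment
    "-t", "10",                    # keep ten seconds
    "-i", "stroboscopy_exam.avi",  # source recorded in a capture format
    "-c:v", "mpeg2video",          # MPEG-2, a distribution format the text names
    "-b:v", "4M",
    "exam_clip.mpg",
]
print(" ".join(cmd))
```

This trades resolution and duration for reliable playback on the recipient's machine, exactly the compromise the record describes.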

  2. Real-time intravascular photoacoustic-ultrasound imaging of lipid-laden plaque at speed of video-rate level

    Science.gov (United States)

    Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin

    2017-03-01

    Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques by providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at speed of video-rate level. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed to be adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for a cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.

  3. The Effect of Theme Preference on Academic Word List Use: A Case for Smartphone Video Recording Feature

    Science.gov (United States)

    Gromik, Nicolas A.

    2017-01-01

    Sixty-seven Japanese English as a Second Language undergraduate learners completed one smartphone video production per week for 12 weeks, based on a teacher-selected theme. Designed as a case study for this specific context, data from students' oral performances was analyzed on a weekly basis for their use of the Academic Word List (AWL). A…

  4. Observing Observers: Using Video to Prompt and Record Reflections on Teachers' Pedagogies in Four Regions of Canada

    Science.gov (United States)

    Reid, David A; Simmt, Elaine; Savard, Annie; Suurtamm, Christine; Manuel, Dominic; Lin, Terry Wan Jung; Quigley, Brenna; Knipping, Christine

    2015-01-01

    Regional differences in performance in mathematics across Canada prompted us to conduct a comparative study of middle-school mathematics pedagogy in four regions. We built on the work of Tobin, using a theoretical framework derived from the work of Maturana. In this paper, we describe the use of video as part of the methodology used. We used…

  5. Robust real-time segmentation of images and videos using a smooth-spline snake-based algorithm.

    Science.gov (United States)

    Precioso, Frederic; Barlaud, Michel; Blu, Thierry; Unser, Michael

    2005-07-01

    This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from large computational cost. A parametric active contour method based on B-Spline interpolation has previously been proposed to greatly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to make our method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving-object segmentation is preserved.
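The interpolation-versus-smoothness trade described in the abstract can be sketched without the full snake machinery: replace exact interpolation of the contour points with a quadratic smoothness penalty and solve a linear system. A minimal NumPy illustration (not the authors' B-spline implementation; the penalty weight is a hypothetical stand-in for the tunable smoothing parameter):

```python
import numpy as np

def smooth_contour(points, lam=5.0):
    """Closed-contour smoothing: minimize ||c - points||^2 + lam*||D2 c||^2,
    where D2 is the circular second-difference operator. This is the same
    interpolation-error-for-smoothness trade a smoothing-spline snake makes."""
    n = len(points)
    D2 = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D2[0, -1] = D2[-1, 0] = 1.0  # wrap around: the contour is closed
    A = np.eye(n) + lam * (D2.T @ D2)
    return np.linalg.solve(A, points)

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
clean = np.c_[np.cos(t), np.sin(t)]  # ideal circular contour
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
smoothed = smooth_contour(noisy)

roughness = lambda c: float(np.sum(np.diff(c, 2, axis=0) ** 2))
print(roughness(smoothed) < roughness(noisy))  # curvature energy drops
```

Increasing `lam` buys a smoother curve at the cost of larger deviation from the data points, which is exactly the noise-robustness mechanism the abstract credits.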

  6. Electromyography-based seizure detector: Preliminary results comparing a generalized tonic-clonic seizure detection algorithm to video-EEG recordings.

    Science.gov (United States)

    Szabó, Charles Ákos; Morgan, Lola C; Karkar, Kameel M; Leary, Linda D; Lie, Octavian V; Girouard, Michael; Cavazos, José E

    2015-09-01

    Automatic detection of generalized tonic-clonic seizures (GTCS) will facilitate patient monitoring and early intervention to prevent comorbidities, recurrent seizures, or death. Brain Sentinel (San Antonio, Texas, USA) developed a seizure-detection algorithm evaluating surface electromyography (sEMG) signals during GTCS. This study aims to validate the seizure-detection algorithm using inpatient video-electroencephalography (EEG) monitoring. sEMG was recorded unilaterally from the biceps/triceps muscles in 33 patients (17 white/16 male) with a mean age of 40 (range 14-64) years who were admitted for video-EEG monitoring. Maximum voluntary biceps contraction was measured in each patient to set up the baseline physiologic muscle threshold. The raw EMG signal was recorded using conventional amplifiers, sampling at 1,024 Hz and filtered with a 60 Hz noise detection algorithm before it was processed with three band-pass filters at pass frequencies of 3-40, 130-240, and 300-400 Hz. A seizure-detection algorithm utilizing Hotelling's T-squared power analysis of compound muscle action potentials was used to identify GTCS and correlated with video-EEG recordings. In 1,399 h of continuous recording, there were 196 epileptic seizures (21 GTCS, 96 myoclonic, 28 tonic, 12 absence, and 42 focal seizures with or without loss of awareness) and 4 nonepileptic spells. During retrospective, offline evaluation of sEMG from the biceps alone, the algorithm detected 20 GTCS (95%) in 11 patients, averaging within 20 s of electroclinical onset of generalized tonic activity, as identified by video-EEG monitoring. Only one false-positive detection occurred during the postictal period following a GTCS, but false alarms were not triggered by other seizure types or spells. Brain Sentinel's seizure detection algorithm demonstrated excellent sensitivity and specificity for identifying GTCS recorded in an epilepsy monitoring unit. Further studies are needed in larger patient groups, including
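Brain Sentinel's algorithm itself is proprietary, but the ingredients the abstract names - band-limited sEMG features scored with Hotelling's T-squared against a baseline - can be sketched generically. All feature values, the baseline, and the threshold below are synthetic:

```python
import numpy as np

def hotelling_t2(x, mu, cov_inv):
    """Hotelling's T-squared distance of feature vector x from a baseline."""
    d = x - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(1)
# Synthetic band-power features per analysis window, one column per
# band (e.g. 3-40, 130-240, 300-400 Hz as in the record).
baseline = rng.normal(loc=[1.0, 0.5, 0.2], scale=0.05, size=(200, 3))
mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

quiet = np.array([1.0, 0.5, 0.2])  # resting muscle
tonic = np.array([3.0, 4.0, 2.5])  # sustained high-frequency contraction
t2_quiet = hotelling_t2(quiet, mu, cov_inv)
t2_tonic = hotelling_t2(tonic, mu, cov_inv)
print(t2_quiet < 10 < t2_tonic)  # a simple threshold separates the regimes
```

Because tonic-clonic activity raises power across all bands jointly, the multivariate T-squared statistic separates it from baseline far more sharply than any single-band threshold would.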

  7. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    Science.gov (United States)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video

  8. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    Science.gov (United States)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating ImagesTM technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating ImagesTM platform when created, or existing software can be re-formatted without much difficulty.

  9. MGN V RDRS DERIVED MOSAIC IMAGE DATA RECORD FULL RES V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set contains the Magellan Full-resolution Mosaic Image Data Records (F-MIDR) which consists of SAR mosaics generated from F-BIDRs (i.e., with 75 meters /...

  10. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Mask Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains a high quality Environmental Data Record (EDR) of cloud masks from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard...

  11. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Aerosol Detection Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of suspended matter from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  12. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Snow Cover Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of snow cover from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument...

  13. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sensor Data Records (SDRs), or Level 1b data, from the Visible Infrared Imaging Radiometer Suite (VIIRS) are the calibrated and geolocated radiance and reflectance...

  14. A system for endobronchial video analysis

    Science.gov (United States)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
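The first stage described - parsing the video stream into shots before key-frame selection - is commonly done with frame-to-frame histogram differences. A minimal sketch on synthetic frames (not the authors' implementation; the bin count and threshold are illustrative choices):

```python
import numpy as np

def shot_boundaries(frames, thresh=0.5):
    """Flag shot boundaries where successive gray-level histograms
    differ strongly (L1 distance between normalized histograms)."""
    cuts, prev = [], None
    for i, f in enumerate(frames):
        h, _ = np.histogram(f, bins=16, range=(0, 1))
        h = h / h.sum()
        if prev is not None and np.abs(h - prev).sum() > thresh:
            cuts.append(i)
        prev = h
    return cuts

rng = np.random.default_rng(2)
# Two synthetic "shots": five dark frames, then five bright frames.
dark = [np.clip(rng.normal(0.2, 0.05, (32, 32)), 0, 1) for _ in range(5)]
bright = [np.clip(rng.normal(0.8, 0.05, (32, 32)), 0, 1) for _ in range(5)]
print(shot_boundaries(dark + bright))  # boundary detected at frame 5
```

Once a video is split into shots, one representative key frame per shot gives the compact summary that the video table of contents relies on.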

  15. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile camera, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam, even a good face

  16. Digital Image Processing Of Arterial Thrombi Images, Recorded By Light Transmission

    Science.gov (United States)

    Nyssen, Marc; Blockeel, Erik; Bourgain, Rene

    1986-05-01

    For several years, the formation and evolution of thrombi in small arteries of rats has been quantitatively studied at the Laboratory of Physiology and Physiopathology at the V.U.B. Global size parameters can be determined by projecting the image of a small arterial segment onto photosensitive cells. The transmitted light intensity is a measure for the thrombotic phenomenon. This unique method permitted extensive in vivo study of the platelet-vessel wall interaction and local thrombosis. Now, a further development has emerged with the aim to improve the resolution of these measurements in order to get information on texture and form of the thrombotic mass at any stage of its evolution. Therefore a thorough understanding of how light propagates through non-hemolyzed blood was essential. For this purpose, the Medical Informatics department developed a system to record and process digital images of the thrombotic phenomenon. For the processing and attempt to reconstruct the thrombi, a model describing the light transmission in a dispersive medium such as flowing blood had to be worked out. Application of results from Twersky's multiple scattering theory, combined with appropriate border conditions and parameter values was attempted. In the particular situation we studied, the dispersive properties of the flowing blood were found to be highly anisotropic. An explanation for this phenomenon could be given by considering the alignment of red blood cells in the blood flow. In order to explain the measured intensity profiles, we had to postulate alignment in the plane perpendicular to the flow as well. The theoretical predictions are in good agreement with the experimental values if we assume almost perfect alignment of the erythrocytes such that their short axes are pointing in the direction of the center of the artery. Conclusive evidence of the interaction between local flow properties and light transmission could be found by observing arteries with perturbed flow.
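Twersky's multiple-scattering treatment is beyond a short sketch, but the basic premise of the measurement - transmitted intensity falling with the optical thickness contributed by blood and by a growing thrombus - can be illustrated with a simple Beer-Lambert model. All coefficients here are hypothetical, not values from the paper:

```python
import numpy as np

# Beer-Lambert toy model (NOT Twersky's multiple-scattering theory):
# transmission drops as the thrombus replaces blood in the light path.
mu_blood = 0.5     # hypothetical attenuation of flowing blood, 1/mm
mu_thrombus = 2.0  # hypothetical attenuation of the thrombotic mass, 1/mm
lumen_mm = 1.0     # vessel lumen crossed by the light beam

def transmission(thrombus_mm):
    """Fraction of incident light transmitted through the vessel."""
    blood_mm = lumen_mm - thrombus_mm
    return float(np.exp(-(mu_blood * blood_mm + mu_thrombus * thrombus_mm)))

print(transmission(0.0), transmission(0.5), transmission(1.0))
```

The monotone decrease is what lets transmitted intensity serve as a proxy for thrombus size; the paper's contribution is the far richer anisotropic scattering model needed for spatially resolved reconstruction.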

  17. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  18. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  19. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
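The CNN feature extraction these records build on stacks convolution, nonlinearity, and pooling stages. A single such stage, reduced to NumPy with two hand-written edge kernels (purely illustrative - the papers' networks are far deeper and their kernels are learned), shows the idea:

```python
import numpy as np

def conv2d(img, k):
    """Naive 'valid' 2-D correlation, sufficient for a sketch."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def features(img, kernels):
    """One conv -> ReLU -> global max-pool stage per kernel; a CNN
    stacks many such stages with learned kernels."""
    return np.array([np.maximum(conv2d(img, k), 0).max() for k in kernels])

vertical = np.array([[-1., 1.], [-1., 1.]])    # hand-written edge kernels,
horizontal = np.array([[-1., -1.], [1., 1.]])  # purely illustrative

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # test image containing a single vertical edge
f = features(img, [vertical, horizontal])
print(f)  # the vertical-edge kernel responds; the horizontal one does not
```

In a trained network these pooled responses, concatenated across many kernels and layers, form the feature vector fed to the final gender classifier.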

  20. Sleep disorders in children with Attention-Deficit/Hyperactivity Disorder (ADHD) recorded overnight by video-polysomnography.

    Science.gov (United States)

    Silvestri, Rosalia; Gagliano, Antonella; Aricò, Irene; Calarese, Tiziana; Cedro, Clemente; Bruni, Oliviero; Condurso, Rosaria; Germanò, Eva; Gervasi, Giuseppe; Siracusano, Rosamaria; Vita, Giuseppe; Bramanti, Placido

    2009-12-01

    To outline specific sleep disturbances in different clinical subsets of Attention Deficit/Hyperactivity Disorder (ADHD) and to confirm, by means of nocturnal video-polysomnography (video-PSG), a variety of sleep disorders in ADHD besides the classically described periodic leg movement disorder (PLMD), restless legs syndrome (RLS) and sleep related breathing disorder (SRBD). Fifty-five ADHD children (47 M, 8F; mean age=8.9 y) were included: 16 had Inattentive and 39 Hyperactive/Impulsive or Combined ADHD subtype. Behavior assessment by Conners and SNAP-IV Scales, a structured sleep interview and a nocturnal video-PSG were administered. Most children/parents reported disturbed, fragmentary sleep at night; complaints were motor restlessness (50%), sleep walking (47.6%), night terrors (38%), confusional arousals (28.5%), snoring (21.4%), and leg discomfort at night associated with RLS (11.9%). There is a significant difference (p value <0.05 or <0.001) in almost all the studied sleep variables between ADHD children and controls. International RLS Rating Scale scoring, Periodic Limb Movements during Sleep (PLMS) and Wake (PLMW) indexes, hyperactivity and opposition scores and ADHD subtype appear related. Different sleep disorders seem to address specific ADHD phenotypes and correlate with severity of symptoms as in sleep related movement disorders occurring in Hyperactive/Impulsive and Combined ADHD subtypes. Besides, an abnormality of the arousal process in slow wave sleep with consequent abnormal prevalence of disorders of arousal possibly enhanced by SRBD has also been detected in 52% of our sample. This study underlines the opportunity to propose and promote the inclusion of sleep studies, possibly by video-PSG, as part of the diagnostic screening for ADHD. This strategy could address the diagnosis and treatment of different specific ADHD phenotypic expressions that might be relevant to children's symptoms and contribute to ADHD severity.

  1. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation offers high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low, and scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based Focal Plane Arrays (FPAs). These three cameras differ in the number of detectors, scanning operation, and detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively, both for direct detection and limited to fixed imaging. The last designed sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging of 30 frames/s and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency modulated continuous wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a user-friendly interface. This FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported
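The FMCW detection principle mentioned in this abstract can be made concrete with the textbook range equation for an ideal sawtooth chirp, R = c · f_b · T / (2B). The sketch below is illustrative only; the sweep parameters and beat frequency are hypothetical and are not taken from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_hz, sweep_time_s, bandwidth_hz):
    """Target range for an ideal sawtooth FMCW chirp.

    beat_hz: measured beat (difference) frequency between transmitted
    and received chirps; sweep_time_s and bandwidth_hz describe the chirp.
    """
    return C * beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

# Hypothetical numbers: a 1 GHz sweep over 1 ms, with a 20 kHz beat,
# corresponds to a target at roughly 3 m.
r = fmcw_range(beat_hz=20e3, sweep_time_s=1e-3, bandwidth_hz=1e9)
```

A larger beat frequency at fixed sweep parameters maps linearly to a more distant target, which is why sampling all 16 GDD pixel lines simultaneously yields per-pixel range and hence 3D imaging.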

  2. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology—The ADAPT Study Data-Set

    Science.gov (United States)

    Bourke, Alan Kevin; Ihlen, Espen Alexander F.; Bergquist, Ronny; Wik, Per Bendik; Vereijken, Beatrix; Helbostad, Jorunn L.

    2017-01-01

    Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, not developed using the target population, or not labelled to a high enough resolution to capture the true detail of human movement. We have designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects’ movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol and a body-worn camera for out-of-lab activities. Video labelling of the subjects’ movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%, and Cohen’s Kappa, corrected kappa, Krippendorff’s alpha and Fleiss’ kappa >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate (≥25 fps) video labelled data recorded in a free-living environment from older adults living independently. This dataset is suitable for validation of existing activity classification systems and development of new activity classification algorithms. PMID:28287449

  3. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology-The ADAPT Study Data-Set.

    Science.gov (United States)

    Bourke, Alan Kevin; Ihlen, Espen Alexander F; Bergquist, Ronny; Wik, Per Bendik; Vereijken, Beatrix; Helbostad, Jorunn L

    2017-03-10

    Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, not developed using the target population, or not labelled to a high enough resolution to capture the true detail of human movement. We have designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects' movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol and a body-worn camera for out-of-lab activities. Video labelling of the subjects' movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%, and Cohen's Kappa, corrected kappa, Krippendorff's alpha and Fleiss' kappa >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate (≥25 fps) video labelled data recorded in a free-living environment from older adults living independently. This dataset is suitable for validation of existing activity classification systems and development of new activity classification algorithms.
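The inter-rater agreement statistics reported above (Cohen's kappa and relatives) can be illustrated with a small sketch. The labels below are hypothetical, not the study's data; only two raters are compared, whereas the study used five:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap of two independent raters
    # with these label frequencies (marginals).
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical activity labels for 10 video frames from two raters:
a = ["walk", "sit", "walk", "stand", "walk", "sit", "sit", "walk", "stand", "walk"]
b = ["walk", "sit", "walk", "stand", "walk", "sit", "walk", "walk", "stand", "walk"]
kappa = cohens_kappa(a, b)  # one disagreement out of ten -> kappa = 5/6
```

Kappa near 1, as in the dataset's reported >0.86, indicates agreement well beyond what the raters' label frequencies would produce by chance.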

  4. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology—The ADAPT Study Data-Set

    Directory of Open Access Journals (Sweden)

    Alan Kevin Bourke

    2017-03-01

Full Text Available Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, not developed using the target population, or not labelled to a high enough resolution to capture the true detail of human movement. We have designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects’ movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol and a body-worn camera for out-of-lab activities. Video labelling of the subjects’ movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%, and Cohen’s Kappa, corrected kappa, Krippendorff’s alpha and Fleiss’ kappa >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate (≥25 fps) video labelled data recorded in a free-living environment from older adults living independently. This dataset is suitable for validation of existing activity classification systems and development of new activity classification algorithms.

  5. Towards an automated analysis of video-microscopy images of fungal morphogenesis

    Directory of Open Access Journals (Sweden)

    Diéguez-Uribeondo, Javier

    2005-06-01

Full Text Available Fungal morphogenesis is an exciting field of cell biology, and several mathematical models have been developed to describe it. These models require experimental evidence for corroboration, and there is therefore a continuous search for new microscopy and image analysis techniques. In this work, we have used a Canny-edge-detector-based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rates and hyphoid fitness (i.e., fit to the theoretical hyphoid profile). The results show that the data obtained with this technique are similar to published data generated with manual tracing techniques carried out on the same species or genus. Thus, we show that application of an edge-detector-based technique to hyphal growth represents an efficient and accurate method to study hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis.
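The idea of deriving a diameter from detected edges can be sketched in miniature. This is not the paper's Canny pipeline (no Gaussian smoothing, non-maximum suppression, or hysteresis); it is a minimal gradient-threshold edge finder applied to one synthetic luminance row, with hypothetical pixel values:

```python
def edge_positions(row, threshold=50):
    """Indices where the absolute luminance gradient exceeds a threshold."""
    grads = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
    return [i for i, g in enumerate(grads) if g > threshold]

def diameter_pixels(row, threshold=50):
    """Distance between the outermost detected edges in one image row."""
    edges = edge_positions(row, threshold)
    return edges[-1] - edges[0] if len(edges) >= 2 else 0

# Synthetic row: dark background (20) with a bright hypha (200)
# spanning 8 pixels; the two strong gradients bracket the hypha.
row = [20] * 6 + [200] * 8 + [20] * 6
d = diameter_pixels(row)
```

Applying this per row along the hyphal axis would yield a diameter profile, from which elongation rates can be estimated across frames.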

  6. A framework for the recognition of high-level surgical tasks from video images for cataract surgeries

    Science.gov (United States)

    Lalys, Florent; Riffaud, Laurent; Bouget, David; Jannin, Pierre

    2012-01-01

The need for a better integration of the new generation of Computer-Assisted-Surgical (CAS) systems has been recently emphasized. One necessity to achieve this objective is to retrieve data from the Operating Room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted in the definition of several visual cues for extracting semantic information, thereby characterizing each frame of the video. Five image-based classifiers were therefore implemented. A step of pupil segmentation was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data. Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) were tested. This association combined the advantages of all methods for a better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%. PMID:22203700
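The DTW component mentioned above can be illustrated with the classic dynamic-programming recurrence. The sketch is one-dimensional for clarity; the framework's actual inputs would be sequences of multi-cue feature vectors, and the series below are hypothetical:

```python
def dtw_distance(s, t):
    """Classic O(len(s) * len(t)) dynamic-time-warping distance.

    dp[i][j] holds the cost of the best alignment of s[:i] with t[:j];
    each cell extends an alignment by a match, insertion, or deletion.
    """
    inf = float("inf")
    n, m = len(s), len(t)
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Two renditions of the same shape at different speeds align at zero cost,
# which is why DTW suits surgical phases performed at varying tempo:
a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 1, 0]
d = dtw_distance(a, b)
```

Euclidean distance would penalize the timing difference between `a` and `b`; DTW warps the time axis so only shape differences contribute.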

  7. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
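The general idea of unequal error protection behind such schemes can be sketched simply. This is not the paper's optimal JPWL allocation; it is a naive greedy round-robin that hands more parity to more important packets (e.g., earlier quality layers of a JPEG 2000 codestream) under a fixed parity budget, with hypothetical weights:

```python
def allocate_fec(importance, parity_budget):
    """Greedily distribute parity symbols to packets by importance.

    importance: per-packet weights, higher = more important.
    Returns the number of parity symbols assigned to each packet.
    """
    alloc = [0] * len(importance)
    # Visit packets from most to least important on every pass.
    order = sorted(range(len(importance)), key=lambda i: -importance[i])
    while parity_budget > 0:
        for i in order:
            if parity_budget == 0:
                break
            alloc[i] += 1
            parity_budget -= 1
    return alloc

# Four packets, header-like packet most important, tail least:
alloc = allocate_fec([8, 4, 2, 1], parity_budget=6)  # -> [2, 2, 1, 1]
```

An optimal scheme would instead minimize expected end-to-end distortion given measured packet-loss statistics; the greedy pass above only conveys why protection should decrease toward less important packets.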

  8. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or "Just Entertainment"?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

The aim of this study was to assess late adolescents' evaluations of and reasoning about gender stereotypes in video games. Female (n = 46) and male (n = 41) students, predominantly European American, with a mean age of 19 years, were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences…

  9. A Peer-Reviewed Instructional Video is as Effective as a Standard Recorded Didactic Lecture in Medical Trainees Performing Chest Tube Insertion: A Randomized Control Trial.

    Science.gov (United States)

    Saun, Tomas J; Odorizzi, Scott; Yeung, Celine; Johnson, Marjorie; Bandiera, Glen; Dev, Shelly P

Online medical education resources are becoming an increasingly used modality and many studies have demonstrated their efficacy in procedural instruction. This study sought to determine whether a standardized online procedural video is as effective as a standard recorded didactic teaching session for chest tube insertion. A randomized control trial was conducted. Participants were taught how to insert a chest tube with either a recorded didactic teaching session, or a New England Journal of Medicine (NEJM) video. Participants filled out a questionnaire before and after performing the procedure on a cadaver, which was filmed and assessed by 2 blinded evaluators using a standardized tool. Western University, London, Ontario. Level of clinical care: institutional. A total of 30 fourth-year medical students from 2 graduating classes at the Schulich School of Medicine & Dentistry were screened for eligibility. Two students did not complete the study and were excluded. There were 13 students in the NEJM group, and 15 students in the didactic group. The NEJM group's average score was 45.2% (±9.56) on the prequestionnaire, 67.7% (±12.9) for the procedure, and 60.1% (±7.65) on the postquestionnaire. The didactic group's average score was 42.8% (±10.9) on the prequestionnaire, 73.7% (±9.90) for the procedure, and 46.5% (±7.46) on the postquestionnaire. There was no difference between the groups on the prequestionnaire (Δ +2.4%; 95% CI: -5.16 to 9.99), or the procedure (Δ -6.0%; 95% CI: -14.6 to 2.65). The NEJM group had better scores on the postquestionnaire (Δ +11.15%; 95% CI: 3.74-18.6). The NEJM video was as effective as video-recorded didactic training for teaching the knowledge and technical skills essential for chest tube insertion. Participants expressed high satisfaction with this modality. It may prove to be a helpful adjunct to standard instruction on the topic. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc

  10. Marine snow, zooplankton and thin layers: indications of a trophic link from small-scale sampling with the Video Plankton Recorder

    DEFF Research Database (Denmark)

    Möller, Klas O.; St. John, Michael; Temming, Axel

    2012-01-01

sampling does not collect marine snow quantitatively and cannot resolve so-called thin layers in which this interaction occurs. Hence, field evidence for the importance of the marine snow–zooplankton link is scarce. Here we employed a Video Plankton Recorder (VPR) to quantify small-scale (metres) vertical distribution patterns of fragile marine snow aggregates and zooplankton in the Baltic Sea during late spring 2002. By using this non-invasive optical sampling technique we recorded a peak in copepod abundance (ca. 18 ind. l−1) associated with a pronounced thin layer (50 to 55 m) of marine snow (maximum… to aggregates and demonstrating feeding behaviour, which also suggests a trophic interaction. Our observations highlight the potential significance of marine snow in marine ecosystems and its potential as a food resource for various trophic levels, from bacteria up to fish…

  11. Comparison between the IT-MAIS and MUSS questionnaires with video-recording for evaluation of children who may receive a cochlear implantation.

    Science.gov (United States)

    Pinto, Elaine Soares Monteiro; Lacerda, Cristina Broglia de Feitosa; Porto, Paulo Rogério Catanhede

    2008-01-01

There is great difficulty in determining early on which children would benefit from cochlear implants, especially because, given their young age, the responses they give are very subtle. To compare results obtained through video-recording of the interactions of children who may receive a cochlear implant with the results obtained through evaluation protocols. Seven children, with an average age of 39.7 months and profound hearing loss, were selected for the study. The IT-MAIS and MUSS questionnaires were given to the parents/guardians of these children, and the results were compared with observation of the video-recordings. It was possible to observe that the data are compatible with the auditory stages. However, the MUSS questionnaire data are very different from the data gathered during playful activities. The questionnaire only takes into consideration the use of verbal language, and therefore the majority of the evaluated children inevitably score low. Observing children at play allows us to trace a better profile of linguistic behavior and of aspects relative to language that may present differences in the questionnaire.

  12. The advantages of using photographs and video images in telephone consultations with a specialist in paediatric surgery

    Directory of Open Access Journals (Sweden)

    Ibrahim Akkoyun

    2012-01-01

Full Text Available Background: The purpose of this study was to evaluate the advantages of a telephone consultation with a specialist in paediatric surgery after photographs and video images are taken by a general practitioner, for the diagnosis of some diseases. Materials and Methods: This was a prospective study of the reliability of paediatric surgery online consultation among specialists and general practitioners. Results: Of 26 general practitioners included in the study, 12 were working in the city and 14 were working in districts outside the city. A total of 41 pictures and 3 videos of 38 patients were sent and evaluated together with the medical history and clinical findings. These patients were diagnosed with umbilical granuloma (n = 6), physiological/pathological phimosis (n = 6), balanitis (n = 6), hydrocele (n = 6), umbilical hernia (n = 4), smegma cyst (n = 2), reducible inguinal hernia (n = 1), incarcerated inguinal hernia (n = 1), paraphimosis (n = 1), buried penis (n = 1), hypospadias (n = 1), epigastric hernia (n = 1), vulvar synechia (n = 1), and rectal prolapse (n = 1). Urgent referral was requested for twelve patients, but it was suggested that only two of these patients, who had paraphimosis and incarcerated inguinal hernia, be referred under emergency conditions. It was decided that there was no need for the other ten patients to be referred to a specialist at night or at the weekend. All diagnoses were confirmed to be correct when all patients underwent examination in the paediatric surgery clinic under elective conditions. Conclusion: Evaluation of photographs and video images of a lesion, together with medical history and clinical findings, via a telephone consultation between a paediatric surgery specialist and a general practitioner provides a definitive diagnosis and prevents patients from being referred unnecessarily.

  13. Simultaneous Measurement of Neural Spike Recordings and Multi-Photon Calcium Imaging in Neuroblastoma Cells

    Directory of Open Access Journals (Sweden)

    Jeehyun Kim

    2012-11-01

Full Text Available This paper proposes the design and implementation of a micro-electrode array (MEA) for neuroblastoma cell culturing. It also explains the implementation of a multi-photon microscope (MPM) customized for neuroblastoma cell excitation and imaging under ambient light. Electrical signals and fluorescence images were simultaneously acquired from the neuroblastoma cells on the MEA. MPM calcium images of the cultured neuroblastoma cells on the MEA are presented, and the neural activity was acquired through MEA recording. A calcium green-1 (CG-1) dextran conjugate of 10,000 Da molecular weight was used in this experiment for calcium imaging. This study also evaluated the calcium oscillations and neural spike recording of neuroblastoma cells in an epileptic condition. Based on our observation of neural spikes in neuroblastoma cells with our proposed imaging modality, we report that neuroblastoma cells can be an important model for epileptic activity studies.

  14. Compression of Video-Otoscope Images for Tele-Otology: A Pilot Study

    Science.gov (United States)

    2001-10-25

…algorithm used in image compression is the one developed by the Joint Photographic Experts Group (JPEG), which has been deployed in almost all imaging… recognised the image, nor go back to view the previous images. This was designed to minimise the effect of memory. After the assessments were tabulated… also have contributed, such as the memory effect, or the experience of the assessor. V. CONCLUSION 1. Images can probably be compressed to about

  15. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or "Just Entertainment"?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-06-01

The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) participants, predominantly European-American (mean age = 19 years), were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over 3 different types of games: games with negative male stereotypes, games with negative female stereotypes, and gender-neutral games. Gender differences were found in how participants evaluated these games. Males were more likely than females to find stereotypes acceptable. Results are discussed in terms of social reasoning, video game playing, and gender differences.

  16. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or “Just Entertainment”?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2015-01-01

The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) participants, predominantly European-American (mean age = 19 years), were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over 3 different types of games: games with negative male stereotypes, games with negative female stereotypes, and gender-neutral games. Gender differences were found in how participants evaluated these games. Males were more likely than females to find stereotypes acceptable. Results are discussed in terms of social reasoning, video game playing, and gender differences. PMID:25722501

  17. Concurrent Calculations on Reconfigurable Logic Devices Applied to the Analysis of Video Images

    Directory of Open Access Journals (Sweden)

    Sergio R. Geninatti

    2010-01-01

Full Text Available This paper presents the design and implementation on FPGA devices of an algorithm for computing similarities between neighboring frames in a video sequence using luminance information. By taking advantage of the well-known flexibility of Reconfigurable Logic Devices, we have designed a hardware implementation of the algorithm used in video segmentation and indexing. The experimental results show the trade-off between concurrent and sequential resources and the functional blocks needed to achieve maximum operational speed with minimum silicon area usage. To evaluate system efficiency, we compare the performance of the hardware solution to that of calculations done via software using general-purpose processors with and without an SIMD instruction set.
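A software reference point for such a luminance similarity measure can be sketched briefly. The paper's exact metric is not reproduced here; the mean absolute luminance difference below is one common simple choice, applied to tiny flattened frames with hypothetical pixel values and threshold:

```python
def mean_abs_luma_diff(frame_a, frame_b):
    """Mean absolute difference between two equal-sized luminance frames."""
    assert len(frame_a) == len(frame_b)
    total = sum(abs(a - b) for a, b in zip(frame_a, frame_b))
    return total / len(frame_a)

def is_shot_cut(frame_a, frame_b, threshold=40.0):
    """Flag a candidate segment boundary when neighbouring frames differ strongly."""
    return mean_abs_luma_diff(frame_a, frame_b) > threshold

# Two nearly identical frames vs. a hard cut (2x2 frames, flattened):
f1 = [100, 100, 100, 100]
f2 = [101, 99, 100, 102]   # same shot, small noise
f3 = [10, 200, 15, 220]    # different shot
```

The per-pixel absolute-difference-and-accumulate structure is also what makes this measure attractive for FPGA implementation: each pixel pair is independent, so the sum parallelizes naturally.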

  18. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or “Just Entertainment”?

    OpenAIRE

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41), predominantly European-American, mean age = 19 years, were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over 3 different types of games: games with negative male stereotypes, and games with negative female stereotyp...

  19. Photometric-Photogrammetric Analysis of Video Images of a Venting of Water from Space Shuttle Discovery

    Science.gov (United States)

    1990-06-15

simulations), which are accompanied by a much less dense cloud of submicron ice droplets produced when the evaporated/sublimed water gas overexpands and… Focus, pan and tilt angles, and angular field are controlled from the crew cabin with the aid of a monochrome video monitor. (Some of these cameras… ice particles when this gas has become overexpanded. 2) The angular spreads of the two types of particle are the same within experimental uncertainty

  20. A Brief Tool to Assess Image-Based Dietary Records and Guide Nutrition Counselling Among Pregnant Women: An Evaluation.

    Science.gov (United States)

    Ashman, Amy M; Collins, Clare E; Brown, Leanne J; Rae, Kym M; Rollo, Megan E

    2016-11-04

Dietitians ideally should provide personally tailored nutrition advice to pregnant women. Provision is hampered by a lack of appropriate tools for nutrition assessment and counselling in practice settings. Smartphone technology, through the use of image-based dietary records, can address limitations of traditional methods of recording dietary intake. Feedback on these records can then be provided by the dietitian via smartphone. Efficacy and validity of these methods require examination. The aims of the Australian Diet Bytes and Baby Bumps study, which used image-based dietary records and a purpose-built brief Selected Nutrient and Diet Quality (SNaQ) tool to provide tailored nutrition advice to pregnant women, were to assess relative validity of the SNaQ tool for analyzing dietary intake compared with nutrient analysis software, to describe the nutritional intake adequacy of pregnant participants, and to assess acceptability of dietary feedback via smartphone. Eligible women used a smartphone app to record everything they consumed over 3 nonconsecutive days. Records consisted of an image of the food or drink item placed next to a fiducial marker, with a voice or text description, or both, providing additional detail. We used the SNaQ tool to analyze participants' intake of daily food group servings and selected key micronutrients for pregnancy relative to Australian guideline recommendations. A visual reference guide consisting of images of foods and drinks in standard serving sizes assisted the dietitian with quantification. Feedback on participants' diets was provided via 2 methods: (1) a short video summary sent to participants' smartphones, and (2) a follow-up telephone consultation with a dietitian. Agreement between dietary intake assessment using the SNaQ tool and nutrient analysis software was evaluated using Spearman rank correlation and Cohen kappa. We enrolled 27 women (median age 28.8 years, 8 Indigenous Australians, 15 primiparas), of whom 25
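The Spearman rank correlation used for the validity comparison can be sketched in pure Python. The servings data below are hypothetical, not the study's; ties share their average rank, as in the standard definition:

```python
def ranks(xs):
    """1-based ranks; tied values share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical servings/day: brief tool vs. full nutrient analysis software.
tool = [2.0, 3.5, 1.0, 4.0, 2.5]
software = [2.1, 3.3, 1.2, 4.4, 2.4]
rho = spearman_rho(tool, software)  # identical orderings -> rho = 1.0
```

Because only ranks enter the statistic, rho rewards the two methods for ordering participants consistently even when their absolute serving estimates differ, which is the sense of "relative validity" used here.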

  1. Head-motion-controlled video goggles: preliminary concept for an interactive laparoscopic image display (i-LID).

    Science.gov (United States)

    Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I

    2009-08-01

Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include the reliance of all viewers on the same image capture, and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD
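The head-pose-to-view mapping described above can be sketched as a toy control function. The function name, gain constants, and sign conventions are hypothetical, not taken from the prototype; the point is only the structure: yaw/pitch drive panning, and distance to the transmitter drives zoom:

```python
def update_view(yaw_deg, pitch_deg, dist_mm, pan_gain=4.0,
                ref_dist_mm=500.0, zoom_gain=0.002):
    """Map head pose to (pan_x, pan_y, zoom) for a cropped image view.

    Yaw and pitch shift the crop window; moving the head toward the
    transmitter (smaller distance) zooms in, moving away zooms out.
    """
    pan_x = pan_gain * yaw_deg      # pixels of horizontal crop offset
    pan_y = -pan_gain * pitch_deg   # negated: screen y grows downward
    zoom = 1.0 + zoom_gain * (ref_dist_mm - dist_mm)
    return pan_x, pan_y, max(zoom, 0.1)  # clamp to avoid inverted zoom

view = update_view(yaw_deg=10.0, pitch_deg=-5.0, dist_mm=400.0)
```

In a real system this would run per frame on filtered orientation data; the clamp and gains stand in for the smoothing and calibration such a prototype would need.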

  2. Video compression and DICOM proxies for remote viewing of DICOM images

    Science.gov (United States)

    Khorasani, Elahe; Sheinin, Vadim; Paulovicks, Brent; Jagmohan, Ashish

    2009-02-01

    Digital medical images are rapidly growing in size and volume. A typical study includes multiple image "slices." These images have a special format and a communication protocol referred to as DICOM (Digital Imaging Communications in Medicine). Storing, retrieving, and viewing these images are handled by DICOM-enabled systems. DICOM images are stored in central repository servers called PACS (Picture Archival and Communication Systems). Remote viewing stations are DICOM-enabled applications that can query the PACS servers and retrieve the DICOM images for viewing. Modern medical images are quite large, reaching as much as 1 GB per file. When the viewing station is connected to the PACS server via a high-bandwidth local LAN, downloading of the images is relatively efficient and does not cause significant wasted time for physicians. Problems arise when the viewing station is located in a remote facility that has a low-bandwidth link to the PACS server. If the link between the PACS and remote facility is in the range of 1 Mbit/sec, downloading medical images is very slow. To overcome this problem, medical images are compressed to reduce the size for transmission. This paper describes a method of compression that maintains diagnostic quality of images while significantly reducing the volume to be transmitted, without any change to the existing PACS servers and viewer software, and without requiring any change in the way doctors retrieve and view images today.
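The bandwidth problem motivating this work can be made concrete with a quick calculation, assuming the figures from the abstract (a 1 GB study, a 1 Mbit/s link) and a hypothetical 20:1 compression ratio:

```python
def transfer_seconds(size_bytes, link_bits_per_s):
    """Idealized transfer time, ignoring protocol overhead and latency."""
    return size_bytes * 8 / link_bits_per_s

one_gb = 1_000_000_000  # decimal GB, in bytes

# Uncompressed study over a 1 Mbit/s link: 8000 s, i.e. over 2 hours.
slow = transfer_seconds(one_gb, 1_000_000)

# With a hypothetical 20:1 compression ratio: about 400 s.
after_20x = transfer_seconds(one_gb / 20, 1_000_000)
```

Even an order-of-magnitude size reduction turns an unusable multi-hour download into minutes, which is the case for compressing at a proxy rather than changing the PACS or viewer.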

  3. The 15 March 2007 paroxysm of Stromboli: video-image analysis, and textural and compositional features of the erupted deposit

    Science.gov (United States)

    Andronico, Daniele; Taddeucci, Jacopo; Cristaldi, Antonio; Miraglia, Lucia; Scarlato, Piergiorgio; Gaeta, Mario

    2013-07-01

    On 15 March 2007, a paroxysmal event occurred within the crater terrace of Stromboli, in the Aeolian Islands (Italy). Infrared and visible video recordings from the monitoring network reveal that there was a succession of highly explosive pulses, lasting about 5 min, from at least four eruptive vents. Initially, brief jets with low apparent temperature were simultaneously erupted from the three main vent regions, becoming hotter and transitioning to bomb-rich fountaining that lasted for 14 s. Field surveys estimate the corresponding fallout deposit to have a mass of ~1.9 × 10⁷ kg that, coupled with the video information on eruption duration, provides a mean mass eruption rate of ~5.4 × 10⁵ kg/s. Textural and chemical analyses of the erupted tephra reveal unexpected complexity, with grain-size bimodality in the samples associated with the different percentages of ash types (juvenile, lithics, and crystals) that reflects almost simultaneous deposition from multiple and evolving plumes. Juvenile glass chemistry ranges from a gas-rich, low porphyricity end member (typical of other paroxysmal events) to a gas-poor high porphyricity one usually associated with low-intensity Strombolian explosions. Integration of our diverse data sets reveals that (1) the 2007 event was a paroxysmal explosion driven by a magma sharing common features with large-scale paroxysms as well as with "ordinary" Strombolian explosions; (2) initial vent opening by the release of a pressurized gas slug and subsequent rapid magma vesiculation and ejection, which were recorded both by the infrared camera and in the texture of fallout products; and (3) lesser paroxysmal events can be highly dynamic and produce surprisingly complex fallout deposits, which would be difficult to interpret from the geological record alone.
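
    The two figures quoted in the abstract are mutually consistent: dividing the deposit mass by the mean mass eruption rate implies an effective emission duration of about 35 s, suggesting the rate refers to the intense explosive pulses rather than the full ~5 min sequence. A one-line check:

```python
mass = 1.9e7       # fallout deposit mass, kg (from the abstract)
rate = 5.4e5       # mean mass eruption rate, kg/s (from the abstract)
duration = mass / rate
print(f"implied emission duration: {duration:.0f} s")   # ≈ 35 s
```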

  4. Content-Based Indexing and Teaching Focus Mining for Lecture Videos

    Science.gov (United States)

    Lin, Yu-Tzu; Yen, Bai-Jang; Chang, Chia-Hu; Lee, Greg C.; Lin, Yu-Chih

    2010-01-01

    Purpose: The purpose of this paper is to propose an indexing and teaching focus mining system for lecture videos recorded in an unconstrained environment. Design/methodology/approach: By applying the proposed algorithms in this paper, the slide structure can be reconstructed by extracting slide images from the video. Instead of applying…

  5. Endoscopic Trimodal Imaging Detects Colonic Neoplasia as Well as Standard Video Endoscopy

    NARCIS (Netherlands)

    Kuiper, Teaco; van den Broek, Frank J. C.; Naber, Anton H.; van Soest, Ellert J.; Scholten, Pieter; Mallant-Hent, Rosalie Ch; van den Brande, Jan; Jansen, Jeroen M.; van Oijen, Arnoud H. A. M.; Marsman, Willem A.; Bergman, Jacques J. G. H. M.; Fockens, Paul; Dekker, Evelien

    2011-01-01

    BACKGROUND & AIMS: Endoscopic trimodal imaging (ETMI) is a novel endoscopic technique that combines high-resolution endoscopy (HRE), autofluorescence imaging (AFI), and narrow-band imaging (NBI) that has only been studied in academic settings. We performed a randomized, controlled trial in a

  6. High resolution, high frame rate video technology development plan and the near-term system conceptual design

    Science.gov (United States)

    Ziemke, Robert A.

    1990-01-01

    The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.

  7. Modification of the Miyake-Apple technique for simultaneous anterior and posterior video imaging of wet laboratory-based corneal surgery.

    Science.gov (United States)

    Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory

    2014-03-01

    The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for the simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone. This served to video record the posterior view of the corneoscleral button during the big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software. The formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially in the behavior of the big bubble formation in DALK.

  8. Efficient video panoramic image stitching based on an improved selection of Harris corners and a multiple-constraint corner matching.

    Directory of Open Access Journals (Sweden)

    Minchen Zhu

    Full Text Available Video panoramic image stitching is extremely time-consuming among other challenges. We present a new algorithm: (i) Improved, self-adaptive selection of Harris corners. The successful stitching relies heavily on the accuracy of corner selection. We fragment each image into numerous regions and select corners within each region according to the normalized variance of region grayscales. Such a selection is self-adaptive and guarantees that corners are distributed proportional to region texture information. The possible clustering of corners is also avoided. (ii) Multiple-constraint corner matching. The traditional Random Sample Consensus (RANSAC) algorithm is inefficient, especially when handling a large number of images with similar features. We filter out many inappropriate corners according to their position information, and then generate candidate matching pairs based on grayscales of adjacent regions around corners. Finally we apply multiple constraints on every two pairs to remove incorrectly matched pairs. By a significantly reduced number of iterations needed in RANSAC, the stitching can be performed in a much more efficient manner. Experiments demonstrate that (i) our corner matching is four times faster than normalized cross-correlation function (NCC) rough match in RANSAC and (ii) generated panoramas feature a smooth transition in overlapping image areas and satisfy real-time human visual requirements.
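
    The self-adaptive, variance-proportional corner selection can be illustrated with a small NumPy sketch. The grid size, total corner budget, and normalization below are our assumptions; the paper's exact scheme and the Harris response computation itself are not reproduced here.

```python
import numpy as np

def corner_budget(gray, grid=(4, 4), total=200):
    """Allocate a per-region corner quota proportional to region grayscale
    variance, mimicking the self-adaptive selection described above
    (illustrative sketch; textured regions receive more corners)."""
    h, w = gray.shape
    gh, gw = grid
    var = np.empty(grid)
    for i in range(gh):
        for j in range(gw):
            block = gray[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            var[i, j] = block.var()
    if var.sum() == 0:
        weights = np.full(grid, 1.0 / (gh * gw))   # flat image: spread evenly
    else:
        weights = var / var.sum()                  # proportional to texture
    return np.round(weights * total).astype(int)
```

    A Harris detector would then be run per region, keeping only the top-scoring corners up to each region's quota, which avoids the clustering the authors mention.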

  9. Assessment of the Potential of UAV Video Image Analysis for Planning Irrigation Needs of Golf Courses

    Directory of Open Access Journals (Sweden)

    Alberto-Jesús Perea-Moreno

    2016-12-01

    Full Text Available Golf courses can be considered a form of precision agriculture: being a playing surface, their appearance is of vital importance. Areas with good weather tend to have low rainfall. Therefore, the water management of golf courses in these climates is a crucial issue due to the high water demand of turfgrass. Golf courses are rapidly transitioning to reuse water, e.g., municipalities in the USA provide price incentives or mandate the use of reuse water for irrigation purposes; in Europe this is mandatory. So, knowing the turfgrass surfaces of a large area can help plan the treated sewage effluent needs. Recycled water is usually of poor quality, thus it is crucial to check the real turfgrass surface in order to be able to plan the global irrigation needs using this type of water. In this way, the irrigation of golf courses does not detract from the natural water resources of the area. The aim of this paper is to propose a new methodology for analysing geometric patterns of video data acquired from UAVs (Unmanned Aerial Vehicles) using a new Hierarchical Temporal Memory (HTM) algorithm. A case study concerning maintained turfgrass, especially for golf courses, has been developed. It shows very good results, better than 98% in the confusion matrix. The results obtained in this study represent a first step toward video imagery classification. In summary, technical progress in computing power and software has shown that video imagery is one of the most promising environmental data acquisition techniques available today. This rapid classification of turfgrass can play an important role for planning water management.

  10. 2011 Tohoku tsunami video and TLS based measurements: hydrographs, currents, inundation flow velocities, and ship tracks

    Science.gov (United States)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Takeda, S.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-12-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of the Tohoku region caused catastrophic damage and loss of life in Japan. The mid-afternoon tsunami arrival combined with survivors equipped with cameras on top of vertical evacuation buildings provided spontaneous spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high quality eyewitness videos. We acquired precise topographic data using TLS at the video sites producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. 
Finally, the instantaneous tsunami
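
    The third step, the direct linear transformation (DLT) from image to world coordinates, can be sketched for the planar case with a standard least-squares homography estimate. This is a textbook DLT under our own simplifying assumptions (planar scene, ≥ 4 correspondences), not the survey's full calibration chain.

```python
import numpy as np

def dlt_homography(img_pts, world_pts):
    """Estimate the 3x3 projective transform mapping image coordinates to
    world coordinates from >= 4 point correspondences via SVD (standard
    planar DLT; the null-space vector of A gives the homography)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_world(H, x, y):
    """Apply the homography to one image point, returning world (X, Y)."""
    v = H @ np.array([x, y, 1.0])
    return v[:2] / v[2]
```

    Ground control points surveyed with the TLS would supply the correspondences; the estimated transform then converts pixel tracks of the tsunami front into metric positions.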

  11. A New Distance Measure Based on Generalized Image Normalized Cross-Correlation for Robust Video Tracking and Image Recognition.

    Science.gov (United States)

    Nakhmani, Arie; Tannenbaum, Allen

    2013-02-01

    We propose two novel distance measures, normalized between 0 and 1, and based on normalized cross-correlation for image matching. These distance measures explicitly utilize the fact that for natural images there is a high correlation between spatially close pixels. Image matching is used in various computer vision tasks, and the requirements to the distance measure are application dependent. Image recognition applications require more shift and rotation robust measures. In contrast, registration and tracking applications require better localization and noise tolerance. In this paper, we explore different advantages of our distance measures, and compare them to other popular measures, including Normalized Cross-Correlation (NCC) and Image Euclidean Distance (IMED). We show which of the proposed measures is more appropriate for tracking, and which is appropriate for image recognition tasks.
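
    As a baseline for the comparisons discussed above, a plain-NCC distance normalized to [0, 1] can be written in a few lines. This is the standard NCC mapped to a distance, not the authors' generalized measure.

```python
import numpy as np

def ncc_distance(a, b):
    """Distance in [0, 1] derived from normalized cross-correlation:
    0 for patches identical up to an affine intensity change, 1 for
    perfectly anti-correlated patches (plain-NCC baseline sketch)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    ncc = (a * b).mean()          # correlation coefficient in [-1, 1]
    return (1.0 - ncc) / 2.0      # map to [0, 1]
```

    Because the patches are z-scored first, brightness and contrast changes leave the distance at zero, which is exactly the invariance that makes NCC-style measures attractive for tracking.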

  12. The effects of physique-salient and physique non-salient exercise videos on women's body image, self-presentational concerns, and exercise motivation.

    Science.gov (United States)

    Ginis, Kathleen A Martin; Prapavessis, Harry; Haase, Anne M

    2008-06-01

    This experiment examined the effects of exposure to physique-salient (PS) and physique non-salient (PNS) exercise videos and the moderating influence of perceived physique discrepancies, on body image, self-presentational concerns, and exercise motivation. Eighty inactive women (M age=26) exercised to a 30 min instructional exercise video. In the PS condition, the video instructor wore revealing attire that emphasized her thin and toned physique. In the PNS condition, she wore attire that concealed her physique. Participants completed pre- and post-exercise measures of body image, social physique anxiety (SPA) and self-presentational efficacy (SPE) and a post-exercise measure of exercise motivation and perceived discrepancies with the instructor's body. No main or moderated effects emerged for video condition. However, greater perceived negative discrepancies were associated with poorer post-exercise body satisfaction and body evaluations, and higher state SPA. There were no effects on SPE or motivation. Results suggest that exercise videos that elicit perceived negative discrepancies can be detrimental to women's body images.

  13. Unusual features of negative leaders' development in natural lightning, according to simultaneous records of current, electric field, luminosity, and high-speed video

    Science.gov (United States)

    Guimaraes, Miguel; Arcanjo, Marcelo; Murta Vale, Maria Helena; Visacro, Silverio

    2017-02-01

    The development of downward and upward leaders that formed two negative cloud-to-ground return strokes in natural lightning, spaced only about 200 µs apart and terminating on ground only a few hundred meters away, was monitored at Morro do Cachimbo Station, Brazil. The simultaneous records of current, close electric field, relative luminosity, and corresponding high-speed video frames (sampling rate of 20,000 frames per second) reveal that the initiation of the first return stroke interfered in the development of the second negative leader, leading it to an apparent continuous development before the attachment, without stepping, and at a regular two-dimensional speed. Based on the experimental data, the formation processes of the two return strokes are discussed, and plausible interpretations for their development are provided.

  14. Gastroesophageal Reflux and Body Movement in Infants: Investigations with Combined Impedance-pH and Synchronized Video Recording

    Directory of Open Access Journals (Sweden)

    Tobias G. Wenzl

    2011-01-01

    Full Text Available The aim of this paper was to investigate the temporal association of gastroesophageal reflux (GER) and body movement in infants. GER episodes were registered by combined impedance-pH; documentation of body movement was done by video. Video recording time (Vt) was divided into "resting time" and "movement time" and analyzed for occurrence of GER. Association was defined as movement 1 minute before/after the beginning of a GER episode. Statistical evaluation was by Fisher's exact test. In 15 infants, 341 GER episodes were documented during Vt (86 hours). 336 GER episodes (99%) were associated with movement; only 5 episodes (1%) occurred during resting time. Movement was significantly associated with the occurrence of GER (P < .0001). There is a strong temporal association between GER and body movement in infants. However, a clear distinction between cause and effect could not be made with the chosen study design. Combined impedance-pH has proven to be the ideal technique for this approach.
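
    Fisher's exact test on a 2x2 table can be computed with the standard hypergeometric enumeration. The sketch below is a generic stdlib implementation; the example table in the test is the classic tea-tasting data, not the study's GER counts (which would additionally require normalizing for resting vs. movement exposure time).

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same margins
    that are no more probable than the observed one (minimal stdlib sketch)."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, col1)

    def p(x):  # probability of a table whose top-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Small tolerance guards against float round-off on ties.
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))
```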

  15. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    Science.gov (United States)

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes, where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined these methods allow automatic coding and classification of behaviors, which results in a more efficient manner of analyzing movements than manual coding.
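
    The CRQA step can be illustrated with its simplest statistic, the cross-recurrence rate between two 1-D movement series. This is a minimal sketch under our own choice of distance and radius; the paper's anisotropic CRQA and line-based measures go well beyond it.

```python
import numpy as np

def cross_recurrence_rate(x, y, radius):
    """Fraction of (i, j) pairs where |x[i] - y[j]| <= radius, i.e. the
    recurrence rate of the cross-recurrence plot between two 1-D series
    (illustrative sketch of the most basic CRQA quantity)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    D = np.abs(x[:, None] - y[None, :])   # cross-distance matrix
    return (D <= radius).mean()
```

    In the study's setting, `x` and `y` would be the automatically tracked head and hand positions; diagonal structures in the thresholded matrix, not just this global rate, carry the coupling information.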

  16. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... forms and through empirical examples, we present and discuss the video recording of sketching sessions, as well as development of video sketches by rethinking, redoing and editing the recorded sessions. The empirical data is based on workshop sessions with researchers and students from universities...... and university colleges and primary and secondary school teachers. As researchers, we have had different roles in these action research case studies where various video sketching techniques were applied.The analysis illustrates that video sketching can take many forms, and two common features are important...

  17. Use of 64 kbits/s digital channel for image transmission: Using low scan two-way video

    Science.gov (United States)

    Rahko, K.; Hongyan, L.; Kley, M.; Peuhkuri, M.; Rahko, M.

    1993-09-01

    At the seminar 'Competition in Telecommunications in Finland' on September 3rd, 1993, a test of two-way image transmission over a 64 kbit/s digital channel was carried out. With the help of the Helsinki Telephone Company, a portrait was transmitted to the lecture hall using Vistacom videophones, Nokia and Siemens ISDN exchanges, as well as Nokia's and Siemens' user terminal equipment. It was shown on a screen through a video projector, so all visitors could see it. For human factors in telecommunications studies, every attendee was asked to give comments about the transmission quality. The report presents the results of the survey and a brief assessment of the technology.

  18. Activity Detection and Retrieval for Image and Video Data with Limited Training

    Science.gov (United States)

    2015-06-10

    Number of graduating undergraduates funded by a DoD funded Center of Excellence grant for Education, Research and Engineering: The number of...geometric snakes to segment the image into constant intensity regions. The Chan-Vese framework proposes to partition the image f(x), x ∈ Ω ⊆ ℝ²

  19. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between on the one hand low-resolution and low-quality images and on the other hand facial analysis systems. The proposed system in this paper deals with exactly this problem. Our approach is to apply a reconstruction-based super-resolution algorithm. Such an algorithm, however, has two main problems: first, it requires relatively similar images with not too much noise...

  20. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.

  1. Target recognition with image/video understanding systems based on active vision principle and network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. This mechanism provides reliable recognition if the target is occluded or cannot be recognized. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps build consistent, unambiguous models. Such Image/Video Understanding Systems will be able to reliably recognize targets in real-world conditions.

  2. Optical image encoding based on digital holographic recording on polarization state of vector wave.

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Xu, Qinzu

    2013-10-01

    We propose and analyze a compact optical image encoder based on the principle of digital holographic recording on the polarization state of a vector wave. The optical architecture is a Mach-Zehnder interferometer with in-line digital holographic recording mechanism. The original image is represented by distinct polarization states of elliptically polarized light. This state of polarization distribution is scrambled and then recorded by a two-step digital polarization holography method with random phase distributed reference wave. Introduction of a rotation key in the object arm and phase keys in the reference arm can achieve the randomization of plaintext. Statistical property of cyphertext is analyzed from confusion and diffusion point of view. Fault tolerance and key sensitivity of the proposed approach are also investigated. A chosen plaintext attack on the proposed algorithm exhibits its high security level. Simulation results that support the theoretical analysis are presented.

  3. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Raul Rojas

    2008-03-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  4. Structured and Collaborative Signal Models: Theory and Applications in Image, Video, and Audio Analysis

    Science.gov (United States)

    2013-01-01

    delivered a number of presentations at universities, including February Fourier Talks (FFT) at the Norbert Wiener Center, University of Maryland... voice separation from monaural recordings using robust low-rank modeling, International Society for Music Information Retrieval Conference, Porto

  5. Photoplethysmography Signal Analysis for Optimal Region-of-Interest Determination in Video Imaging on a Built-In Smartphone under Different Conditions

    Directory of Open Access Journals (Sweden)

    Yunyoung Nam

    2017-10-01

    Full Text Available Smartphones and tablets are widely used in medical fields, where they can improve healthcare and reduce its costs. Many medical applications for smartphones and tablets have already been developed and are widely used by both health professionals and patients. Specifically, video recordings of fingertips made using a smartphone camera contain a pulsatile component caused by the cardiac pulse, equivalent to that present in a photoplethysmographic (PPG) signal. By performing peak detection on the pulsatile signal, it is possible to estimate a continuous heart rate and a respiratory rate. To estimate the heart rate and respiratory rate accurately, it must be investigated which pixel regions of the color bands give the optimal signal quality. In this paper, we investigate signal quality, identified by the largest amplitude values, for three different smartphones under different conditions. We conducted several experiments to obtain reliable PPG signals and compared the PPG signal strength in the three color bands when the flashlight was both on and off. We also evaluated the intensity changes of PPG signals obtained from the smartphones with motion artifacts and fingertip pressure force. Furthermore, we compared the PSNR of PPG signals of the full-size images with that of the regions of interest (ROIs).
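
    The peak-detection idea for heart-rate estimation can be sketched on a synthetic pulse. The mean-threshold detector, the 30 fps sampling rate, and the absence of filtering are simplifying assumptions for illustration, not the app's actual processing chain.

```python
import numpy as np

def heart_rate_bpm(ppg, fs):
    """Estimate heart rate from a PPG trace by simple local-maximum peak
    detection thresholded at the signal mean (illustrative sketch; a real
    detector would add band-pass filtering and artifact rejection)."""
    ppg = np.asarray(ppg, float)
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]
             and ppg[i] > ppg.mean()]
    if len(peaks) < 2:
        return 0.0
    mean_interval = np.diff(peaks).mean() / fs   # seconds per beat
    return 60.0 / mean_interval

# Synthetic 75-bpm pulse sampled at 30 fps (a typical smartphone video rate):
fs, bpm = 30, 75
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * (bpm / 60) * t)
```

    On real fingertip video, `ppg` would be the mean intensity of the chosen ROI in one color band per frame, which is exactly where the paper's ROI and color-band comparison matters.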

  6. Learning Trajectory for Transforming Teachers' Knowledge for Teaching Mathematics and Science with Digital Image and Video Technologies in an Online Learning Experience

    Science.gov (United States)

    Niess, Margaret L.; Gillow-Wiles, Henry

    2014-01-01

    This qualitative cross-case study explores the influence of a designed learning trajectory on transforming teachers' technological pedagogical content knowledge (TPACK) for teaching with digital image and video technologies. The TPACK Learning Trajectory embeds tasks with specific instructional strategies within a social metacognitive…

  7. Global adjustment for creating extended panoramic images in video-dermoscopy

    Science.gov (United States)

    Faraz, Khuram; Blondel, Walter; Daul, Christian

    2017-07-01

    This contribution presents a fast global adjustment scheme exploiting SURF descriptor locations for constructing large skin mosaics. Precision in pairwise image registration is well-preserved while significantly reducing the global mosaicing error.
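
    Global adjustment distributes pairwise registration error over the whole mosaic instead of letting it accumulate along a chain. A 1-D toy version (translations only, image 0 fixed at the origin; the paper works with 2-D SURF-based correspondences) can be written as a small least-squares problem:

```python
import numpy as np

def global_positions(n, pairwise):
    """Least-squares global adjustment of 1-D image offsets from pairwise
    measurements {(i, j): offset_of_j_minus_i}, anchoring image 0 at 0.
    A toy analogue of the global mosaicing adjustment described above."""
    rows, rhs = [], []
    for (i, j), off in pairwise.items():
        r = np.zeros(n)
        r[j], r[i] = 1.0, -1.0     # encodes p[j] - p[i] = off
        rows.append(r)
        rhs.append(off)
    anchor = np.zeros(n)
    anchor[0] = 1.0                # gauge constraint: p[0] = 0
    rows.append(anchor)
    rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

    With redundant, slightly inconsistent pairwise offsets, the least-squares solution spreads the inconsistency across all images, which is what keeps large skin mosaics from drifting.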

  8. Inter- and intra-specific diurnal habitat selection of zooplankton during the spring bloom observed by Video Plankton Recorder

    DEFF Research Database (Denmark)

    Sainmont, Julie; Gislason, Astthor; Heuschele, Jan

    2014-01-01

    Recorder (VPR), a tool that allows mapping of vertical zooplankton distributions with a far greater spatial resolution than conventional zooplankton nets. The study took place over a full day–night cycle in Disko Bay, Greenland, during the peak of the phytoplankton spring bloom. The sampling revealed...... exposure) and thereby is likely to influence both state (hunger, weight and stage) and survival. The results suggest that the copepods select day and night time habitats with similar light levels (~10⁻⁹ μmol photon s⁻¹ m⁻²). Furthermore, Calanus spp. displayed state-dependent behavior, with DVM most...

  9. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel value calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  10. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    Science.gov (United States)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane, characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.
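The reported agreement can be expressed as a pixel-level precision between the automated and visually inspected masks; a minimal numpy sketch (the masks are invented for illustration, and the segmentation itself, watershed plus relaxation labelling, is not reproduced here):

```python
import numpy as np

def precision(auto_mask, manual_mask):
    """Fraction of automatically detected mat pixels confirmed by visual inspection."""
    true_positives = np.logical_and(auto_mask, manual_mask).sum()
    detected = auto_mask.sum()
    return true_positives / detected if detected else 0.0

# Illustrative masks: the automated detector flags one extra row of pixels.
manual = np.zeros((10, 10), dtype=bool)
manual[0:5, :] = True        # visually confirmed Beggiatoa mat
auto = np.zeros((10, 10), dtype=bool)
auto[0:6, :] = True          # automated segmentation, slightly over-detecting
```

Here precision(auto, manual) is 50/60 ≈ 0.83; the paper's figure of better than 90% corresponds to a tighter match between the two masks.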

  11. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications taken by Bachelor's degree students in Multimedia Engineering. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  12. Fuzzy-Based Segmentation for Variable Font-Sized Text Extraction from Images/Videos

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-01-01

    Full Text Available Textual information embedded in multimedia can provide a vital tool for indexing and retrieval. A lot of work has been done in the field of text localization and detection because of its fundamental importance. One of the biggest challenges of text detection is dealing with variation in font sizes and image resolution. This problem is aggravated by the undersegmentation or oversegmentation of the regions in an image. The paper addresses this problem by proposing a solution based on a novel fuzzy-based method. This paper advocates a postprocessing segmentation method that can solve the problem of variation in text sizes and image resolution. The methodology is tested on the ICDAR 2011 Robust Reading Challenge dataset, which amply demonstrates the strength of the recommended method.

  13. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen-image creation depending on the musical form and the text of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  14. Automated Video Quality Assessment for Deep-Sea Video

    Science.gov (United States)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: Single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: Turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): The rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating
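Two of the listed degradations lend themselves to simple per-frame indicators; a minimal numpy sketch (function names and the example frames are illustrative, not ONC's implementation):

```python
import numpy as np

def luminance_nonuniformity(frame):
    """Ratio of luminance std-dev to mean; high values suggest uneven lighting."""
    f = frame.astype(float)
    return f.std() / (f.mean() + 1e-9)

def rms_contrast(frame):
    """Root-mean-square contrast; very low values suggest absorption washout."""
    f = frame.astype(float) / 255.0
    return np.sqrt(np.mean((f - f.mean()) ** 2))

# A flat grey frame scores near zero on both; a half-dark frame scores high.
flat = np.full((64, 64), 128, np.uint8)
uneven = np.hstack([np.full((64, 32), 30, np.uint8),
                    np.full((64, 32), 220, np.uint8)])
```

Frames failing such checks would be excluded or flagged before running the full automated analysis.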

  15. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees.

    Science.gov (United States)

    Giraldo, Paula Jimena Ramos; Aguirre, Álvaro Guerrero; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio

    2017-04-06

    Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of plagues or diseases.
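The abstract does not specify the clarity index; a common stand-in for a blurriness-based sharpness factor is the variance of a Laplacian response, sketched here in plain numpy (illustrative, not the authors' exact measure):

```python
import numpy as np

def laplacian_variance(frame):
    """Variance of a 4-neighbour Laplacian response; higher means sharper."""
    f = frame.astype(float)
    lap = (-4.0 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()

def pick_sharpest(frames):
    """Index of the frame with the highest sharpness factor."""
    return max(range(len(frames)), key=lambda i: laplacian_variance(frames[i]))

flat = np.full((32, 32), 128.0)                             # featureless (blurred) frame
checker = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0    # high-detail frame
```

Selecting the frame with the highest score is one way to keep only the most appropriate frames from a video pass over a branch.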

  16. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of plagues or diseases.

  17. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all new, Hollywood style audio techniques to bring your independent film and video productions to the next level.In Sound for Digital Video, Second Edition industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chockfull of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  18. Imaging of Volume Phase Gratings in a Photosensitive Polymer, Recorded in Transmission and Reflection Geometry

    Directory of Open Access Journals (Sweden)

    Tina Sabel

    2014-02-01

    Full Text Available Volume phase gratings, recorded in a photosensitive polymer by two-beam interference exposure, are studied by means of optical microscopy. Transmission gratings and reflection gratings, with periods on the order of 10 μm down to 130 nm, were investigated. Mapping of holograms by means of imaging in sectional view is introduced to study reflection-type gratings, evading the resolution limit of classical optical microscopy. In addition, this technique is applied to examine so-called parasitic gratings, arising from interference between the incident reference beam and the reflected signal beam. The appearance and possible avoidance of such unintentionally recorded secondary structures is discussed.

  19. Functional analysis of voice using simultaneous high-speed imaging and acoustic recordings.

    Science.gov (United States)

    Yan, Yuling; Damrose, Edward; Bless, Diane

    2007-09-01

    We present a comprehensive, functional analysis of clinical voice data derived from both high-speed digital imaging (HSDI) of the larynx and simultaneously acquired acoustic recordings. The goals of this study are to: (1) correlate dynamic characteristics of the vocal folds derived from direct laryngeal imaging with indirectly acquired acoustic measurements; (2) define the advantages of using a combined imaging/acoustic approach for the analysis of voice condition; and (3) identify new quantitative measures to evaluate the regularity of the vocal fold vibration and the complexity of the vocal output -- these measures will be key to successful diagnosis of vocal abnormalities. Image- and acoustic-based analyses are performed using an analytic phase plot approach previously introduced by our group (referred to as 'Nyquist' plot). Fast Fourier Transform (FFT) spectral analyses are performed on the same data for a comparison. Clinical HSDI and acoustic recordings from subjects having normal and specific voice pathologies, including muscular tension dysphonia (MTD) and recurrent respiratory papillomatosis (RRP) were analyzed using the Nyquist plot approach. The results of these analyses show that a combined imaging/acoustic analysis approach provides better characterization of the vibratory behavior of the vocal folds as it correlates with vocal output and pathology.
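The analytic phase ("Nyquist") plot is built from the analytic signal of the recording, which can be obtained with an FFT-based Hilbert transform; a minimal numpy sketch on a synthetic tone (the clinical analysis is more involved):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal z(t) = x(t) + i*H[x](t) via the FFT-based Hilbert transform."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0        # double positive frequencies, zero negative ones
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

# A steady 100 Hz tone: plotting imag vs. real parts traces a clean circle,
# while irregular vocal fold vibration would smear the trajectory.
t = np.arange(2000) / 2000.0
tone = np.cos(2 * np.pi * 100 * t)
z = analytic_signal(tone)
```

Regular phonation yields a tight, repeatable loop in the complex plane; pathology such as MTD or RRP would show up as a dispersed or irregular trajectory.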

  20. Luminal volume reconstruction from angioscopic video images of casts from human coronary arteries

    NARCIS (Netherlands)

    J.C.H. Schuurbiers (Johan); C.J. Slager (Cornelis); P.W.J.C. Serruys (Patrick)

    1994-01-01

    Intravascular angioscopy has been hampered by its limitation in quantifying obtained images. To circumvent this problem, a lightwire was used, which projects a ring of light onto the endoluminal wall in front of the angioscope. This investigation was designed to quantify luminal

  1. Embedded electronics for a video-rate distributed aperture passive millimeter-wave imager

    Science.gov (United States)

    Curt, Petersen F.; Bonnett, James; Schuetz, Christopher A.; Martin, Richard D.

    2013-05-01

    Optical upconversion for a distributed aperture millimeter wave imaging system is highly beneficial due to its superior bandwidth and limited susceptibility to EMI. These features mean the same technology can be used to collect information across a wide spectrum, as well as in harsh environments. Some practical uses of this technology include safety of flight in degraded visual environments (DVE), imaging through smoke and fog, and even electronic warfare. Using fiber-optics in the distributed aperture poses a particularly challenging problem with respect to maintaining coherence of the information between channels. In order to capture an image, the antenna aperture must be electronically steered and focused to a particular distance. Further, the state of the phased array must be maintained, even as environmental factors such as vibration, temperature and humidity adversely affect the propagation of the signals through the optical fibers. This phenomenon cannot be avoided or mitigated, but rather must be compensated for using a closed-loop control system. In this paper, we present an implementation of embedded electronics designed specifically for this purpose. This novel architecture is efficiently small, scalable to many simultaneously operating channels and sufficiently robust. We present our results, which include integration into a 220 channel imager and phase stability measurements as the system is stressed according to MIL-STD-810F vibration profiles of an H-53E heavy-lift helicopter.

  2. Annual National Vocational-Technical Teacher Education Seminar Proceedings: Micro-Teaching and Video Recording (3rd, Miami Beach, Fla., Oct. 20-23, 1969. Final Report. Leadership Series No. 25.

    Science.gov (United States)

    Cotrell, Calvin J., Ed.; Bice, Gary R., Ed.

    This second of two volumes, resulting from a seminar attended by 232 vocational-technical leaders from 34 states and the District of Columbia, covers the general sessions and the sub-seminar on micro-teaching and video recording. General session presentations on teacher education by Martin W. Essex, Virgil S. Lagomarceno, and William G. Loomis are…

  3. Recording multiple spatially-heterodyned direct to digital holograms in one digital image

    Science.gov (United States)

    Hanson, Gregory R [Clinton, TN; Bingham, Philip R [Knoxville, TN

    2008-03-25

    Systems and methods are described for recording multiple spatially-heterodyned direct to digital holograms in one digital image. A method includes digitally recording, at a first reference beam-object beam angle, a first spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded first spatially-heterodyned hologram by shifting a first original origin of the recorded first spatially-heterodyned hologram to sit on top of a first spatial-heterodyne carrier frequency defined by the first reference beam-object beam angle; digitally recording, at a second reference beam-object beam angle, a second spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded second spatially-heterodyned hologram by shifting a second original origin of the recorded second spatially-heterodyned hologram to sit on top of a second spatial-heterodyne carrier frequency defined by the second reference beam-object beam angle; applying a first digital filter to cut off signals around the first original origin and define a first result; performing a first inverse Fourier transform on the first result; applying a second digital filter to cut off signals around the second original origin and define a second result; and performing a second inverse Fourier transform on the second result, wherein the first reference beam-object beam angle is not equal to the second reference beam-object beam angle and a single digital image includes both the first spatially-heterodyned hologram and the second spatially-heterodyned hologram.
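In one dimension, the claimed demodulation sequence (Fourier transform, shift the heterodyne carrier onto the origin, digitally filter, inverse transform) can be sketched with numpy; the fringe pattern and carrier frequency below are synthetic:

```python
import numpy as np

N = 1024
x = np.arange(N)
k0 = 50                                                  # carrier (fringe) frequency, cycles/record
amplitude = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * x / N)    # slowly varying object term
fringes = amplitude * np.cos(2 * np.pi * k0 * x / N)     # recorded heterodyne fringe pattern

S = np.fft.fft(fringes)
S = np.roll(S, -k0)                        # shift origin onto the spatial-heterodyne carrier
freqs = np.fft.fftfreq(N, d=1.0 / N)
S[np.abs(freqs) > k0 / 2] = 0.0            # digital filter: cut the other sideband and leakage
recovered = 2.0 * np.abs(np.fft.ifft(S))   # complex-envelope magnitude ≈ original amplitude
```

In the patent's 2-D case the same shift-filter-invert sequence is applied once per reference-beam angle, which is why two holograms with different carrier frequencies can share one digital image.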

  4. Determining the optimal age for recording the retinal vascular pattern image of lambs.

    Science.gov (United States)

    Rojas-Olivares, M A; Caja, G; Carné, S; Salama, A A K; Adell, N; Puig, P

    2012-03-01

    Newborn Ripollesa lambs (n = 143) were used to assess the optimal age at which the vascular pattern of the retina can be used as a reference for identification and traceability. Retinal images from both eyes were recorded from birth to yearling (d 1, 8, 30, 82, 180, and 388 of age) in duplicate (2,534 images) using a digital camera specially designed for livestock (Optibrand, Fort Collins, CO). Intra- and inter-age image comparisons (9,316 pairs of images) were carried out, and matching score (MS) was used as the exclusion criterion of lamb identity (MS < 70). Lamb groups compared included "ovino mayor" lambs (RR; 6 mo of age and ~35 kg of BW, n = 59) and yearling replacement lambs (YR; >12 mo of age and ~50 kg of BW, n = 25). Values of MS were treated with a model based on the 1-inflated bivariate beta distribution, and treated data were compared by using a likelihood ratio test. Intra-age image comparisons showed that average MS and percentage of images with MS ≥70 increased with age (P < 0.05); no differences were detected for 30-d images (97.4 and 98.0%, respectively, for RR and YR lambs; P > 0.05). Total percentage of matching was achieved when images were obtained from older lambs (180 and 388 d). In conclusion, retinal imaging was a useful tool for verifying the identity and auditing the traceability of live lambs from suckling to yearling. Matching scores were satisfactory when the reference retinal images were obtained from 1-mo-old or older lambs.

  5. Video Editing System

    Science.gov (United States)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller, and the Pioneer rewritable laserdisc recorder.

  6. The Prediction of Position and Orientation Parameters of Uav for Video Imaging

    Science.gov (United States)

    Wierzbicki, D.

    2017-08-01

    The paper presents the results of predicting the position and orientation parameters of an unmanned aerial vehicle (UAV) equipped with a compact digital camera. The focus of this paper is achieving optimal accuracy and reliability of georeferenced video frames on the basis of data from the navigation sensors mounted on the UAV. Two mathematical models were used for the prediction: a polynomial model and a trigonometric model. The forecast values of UAV position and orientation were compared with the readings of the low-cost GPS and INS sensors mounted on the unmanned Trimble UX-5 platform. The experiment was conducted on navigation data from 23 measuring epochs, comparing the forecast coordinates and rotation angles with the actual Trimble UX-5 sensor readings. The comparison showed that the worst coordinate agreement was obtained for the Y coordinate under both prediction models; nevertheless, the standard deviations of the XYZ coordinates from both prediction models remained within the admissible 10 m criterion for UAV positioning accuracy. For the rotation angles, the best agreement was obtained for the Pitch angle and the worst for the Heading and Roll angles under both prediction models. The standard deviation of the predicted HPR angles stayed within the admissible 5° accuracy only for the Pitch angle, and exceeded it for the Heading and Roll angles.
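As an illustration of the polynomial prediction model, each navigation parameter can be fit over past epochs and extrapolated one epoch ahead; a minimal numpy sketch (the polynomial degree and uniform epoch spacing are assumptions, not the paper's settings):

```python
import numpy as np

def predict_next(values, degree=2):
    """Fit a polynomial over past measuring epochs and extrapolate one epoch ahead."""
    t = np.arange(len(values))
    coeffs = np.polyfit(t, values, degree)
    return np.polyval(coeffs, len(values))

# A coordinate following a quadratic track is extrapolated exactly.
track = np.array([0.5 * k**2 + 2.0 * k + 10.0 for k in range(10)])
predicted = predict_next(track)   # expected value at epoch 10: 0.5*100 + 20 + 10 = 80.0
```

In practice each of the six parameters (X, Y, Z, Heading, Pitch, Roll) would be predicted independently and the residuals against the GPS/INS readings summarized as standard deviations, as the paper does.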

  7. Can CCTV identify people in public transit stations who are at risk of attempting suicide? An analysis of CCTV video recordings of attempters and a comparative investigation

    Directory of Open Access Journals (Sweden)

    Brian L. Mishara

    2016-12-01

    Full Text Available Abstract Background Suicides occur in all public transit systems which do not completely impede access to tracks. We conducted two studies to determine if we can reliably identify in stations people at risk of suicide in order to intervene in a timely manner. The first study analysed all CCTV recordings of suicide attempters in Montreal underground stations over 2 years to identify behaviours indicating suicide risk. The second study verified the potential of using those behaviours to discriminate attempters from other passengers in real time. Methods First study: Trained observers watched CCTV video recordings of 60 attempters, with 2–3 independent observers coding seven easily observable behaviours and five behaviours requiring interpretation (e.g. "strange behaviours," "anxious behaviour"). Second study: We randomly mixed 63 five-minute CCTV recordings before an attempt with 56 recordings from the same cameras at the same time of day, and day of week, but when no suicide attempt was to occur. Thirty-three undergraduate students, after only 10 min of instructions, watched the recordings and indicated if they observed each of 13 behaviours identified in the first study. Results First study: Fifty (83%) of attempters had easily observable behaviours potentially indicative of an impending attempt, and 37 (61%) had two or more of these behaviours. Forty-five (75%) had at least one behaviour requiring interpretation. Twenty-two witnesses attempted to intervene to stop the attempt, and 75% of attempters had behaviours indicating possible ambivalence (e.g. waiting for several trains to pass; trying to get out of the path of the train). Second study: Two behaviours, leaving an object on the platform and pacing back and forth from the yellow line (just before the edge of the platform), could identify 24% of attempters with no false positives. The other target behaviours were also present in non-attempters. However, having two or more of these

  8. Can CCTV identify people in public transit stations who are at risk of attempting suicide? An analysis of CCTV video recordings of attempters and a comparative investigation.

    Science.gov (United States)

    Mishara, Brian L; Bardon, Cécile; Dupont, Serge

    2016-12-15

    Suicides occur in all public transit systems which do not completely impede access to tracks. We conducted two studies to determine if we can reliably identify in stations people at risk of suicide in order to intervene in a timely manner. The first study analysed all CCTV recordings of suicide attempters in Montreal underground stations over 2 years to identify behaviours indicating suicide risk. The second study verified the potential of using those behaviours to discriminate attempters from other passengers in real time. First study: Trained observers watched CCTV video recordings of 60 attempters, with 2-3 independent observers coding seven easily observable behaviours and five behaviours requiring interpretation (e.g. "strange behaviours," "anxious behaviour"). Second study: We randomly mixed 63 five-minute CCTV recordings before an attempt with 56 recordings from the same cameras at the same time of day, and day of week, but when no suicide attempt was to occur. Thirty-three undergraduate students, after only 10 min of instructions, watched the recordings and indicated if they observed each of 13 behaviours identified in the first study. First study: Fifty (83%) of attempters had easily observable behaviours potentially indicative of an impending attempt, and 37 (61%) had two or more of these behaviours. Forty-five (75%) had at least one behaviour requiring interpretation. Twenty-two witnesses attempted to intervene to stop the attempt, and 75% of attempters had behaviours indicating possible ambivalence (e.g. waiting for several trains to pass; trying to get out of the path of the train). Second study: Two behaviours, leaving an object on the platform and pacing back and forth from the yellow line (just before the edge of the platform), could identify 24% of attempters with no false positives. The other target behaviours were also present in non-attempters.
However, having two or more of these behaviours indicated a likelihood of being at risk of attempting

  9. Ventilator Data Extraction with a Video Display Image Capture and Processing System.

    Science.gov (United States)

    Wax, David B; Hill, Bryan; Levin, Matthew A

    2017-06-01

    Medical hardware and software device interoperability standards are not uniform. The result of this lack of standardization is that information available on clinical devices may not be readily or freely available for import into other systems for research, decision support, or other purposes. We developed a novel system to import discrete data from an anesthesia machine ventilator by capturing images of the graphical display screen and using image processing to extract the data with off-the-shelf hardware and open-source software. We were able to successfully capture and verify live ventilator data from anesthesia machines in multiple operating rooms and store the discrete data in a relational database at a substantially lower cost than vendor-sourced solutions.
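A minimal sketch of the screen-capture extraction idea: crop fixed glyph regions from the captured frame and match them against reference templates (the templates, layout, and glyph size here are invented, not the authors' system):

```python
import numpy as np

# Toy 3x3 binary glyph templates for two "digits" (invented for illustration).
TEMPLATES = {
    "1": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "7": np.array([[1, 1, 1],
                   [0, 0, 1],
                   [0, 0, 1]]),
}

def read_glyph(region):
    """Return the template label with the fewest mismatching pixels."""
    return min(TEMPLATES, key=lambda k: np.sum(TEMPLATES[k] != region))

def read_value(screen, boxes):
    """Extract a numeric string by reading fixed glyph boxes off the captured frame."""
    return "".join(read_glyph(screen[r:r + 3, c:c + 3]) for r, c in boxes)

# A captured "display" showing 17 in two adjacent glyph cells.
screen = np.zeros((3, 6), dtype=int)
screen[:, 0:3] = TEMPLATES["1"]
screen[:, 3:6] = TEMPLATES["7"]
```

A real ventilator display needs thresholding, registration to the screen layout, and a full glyph set, but the match-against-known-glyphs principle is the same.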

  10. Overhead-Based Image and Video Geo-Localization Framework (Open Access)

    Science.gov (United States)

    2013-09-12

    States using 100 street-level query photos. The problem is very challenging because we are trying to match two heterogeneous image sources: a street...system on the whole Switzerland area. Bansal et al. [2] were able to match query street-level facades to airborne LIDAR imagery under challenging...cover imagery. This data covers various areas in the continental United States and the world, but our system tested two world regions within the

  11. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    Whether it's photography, computer graphics, publishing, or video, each medium has a defined color space, or gamut, which defines the extent to which a given set of RGB colors can be mixed. When converting from one medium to another, an image must go through some form of conversion which maps colors into the destination color space. The conversion process isn't always straightforward, easy, or reversible. In video, two common analog composite color spaces are Y'UV (used in PAL) and Y'IQ (used in NTSC). These two color spaces have been around since the beginning of color television, and are primarily used in video transmission. Another analog scheme used in broadcast studios is Y', R'-Y', B'-Y' (used in Betacam and MII), which is a component format. Y', R'-Y', B'-Y' maintains the color information of RGB but in less space. From this, the digital component video specification, ITU-R Rec. 601-4 (formerly CCIR Rec. 601), was derived. The color space for Rec. 601 is symbolized as Y'CbCr. Digital video formats such as DV, D1, Digital-S, etc., use Rec. 601 to define their color gamut. Digital composite video (for D2 tape) is digitized analog Y'UV and is seeing decreased use. Because so much information is contained in video, segments of any significant length usually require some form of data compression. All of the above-mentioned analog video formats are a means of reducing the bandwidth of RGB video. Video bulk storage devices, such as digital disk recorders, usually store frames in Y'CbCr format, even if no other compression method is used. Computer graphics and computer animations originate in RGB format because RGB must be used to calculate lighting and shadows. But storage of long animations in RGB format is usually cost prohibitive, and a 30 frame-per-second data rate of uncompressed RGB is beyond most computers. By taking advantage of certain aspects of the human visual system, true color 24-bit RGB video images can be compressed with minimal loss of visual information
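The Rec. 601 Y'CbCr encoding mentioned above is a fixed linear transform of gamma-corrected R'G'B'; for 8-bit studio-range coding, the standard matrix and offsets give the familiar 16-235 luma range. A short numpy sketch:

```python
import numpy as np

def rgb_to_ycbcr601(rgb):
    """ITU-R BT.601 8-bit studio-range conversion; rgb holds R'G'B' floats in [0, 1]."""
    matrix = np.array([[ 65.481, 128.553,  24.966],   # Y'  in [16, 235]
                       [-37.797, -74.203, 112.000],   # Cb  in [16, 240]
                       [112.000, -93.786, -18.214]])  # Cr  in [16, 240]
    offset = np.array([16.0, 128.0, 128.0])
    return np.asarray(rgb) @ matrix.T + offset

white = rgb_to_ycbcr601([1.0, 1.0, 1.0])   # -> [235, 128, 128]
black = rgb_to_ycbcr601([0.0, 0.0, 0.0])   # -> [16, 128, 128]
```

The chroma offsets of 128 are what make subsampling and storage of Cb/Cr as unsigned bytes convenient in the DV and D1 formats mentioned above.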

  12. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  13. Recent advances in recording electrophysiological data simultaneously with magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Laufs, H. [Univ Frankfurt, Zentrum Neurol and Neurochirurg, Neurol Klin, D-60590 Frankfurt (Germany); Laufs, H. [Univ Frankfurt, Dept Neurol, D-60590 Frankfurt (Germany); Laufs, H. [Univ Frankfurt, Brain Imaging Ctr, D-60590 Frankfurt (Germany); Laufs, H.; Carmichael, D.W. [UCL, Inst Neurol, Dept Clin and Expt Epilepsy, London (United Kingdom); Daunizeau, J. [Wellcome Trust Ctr Neuroimaging, London (United Kingdom); Kleinschmidt, A. [INSERM, Unite 562, F-91191 Gif SurYvette (France); Kleinschmidt, A. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Kleinschmidt, A. [Univ Paris 11, F-91405 Orsay (France)

    2008-07-01

    Simultaneous recording of brain activity by different neuro-physiological modalities can yield insights that reach beyond those obtained by each technique individually, even when compared to those from the post-hoc integration of results from each technique recorded sequentially. Success in the endeavour of real-time multimodal experiments requires special hardware and software as well as purpose-tailored experimental design and analysis strategies. Here, we review the key methodological issues in recording electrophysiological data in humans simultaneously with magnetic resonance imaging (MRI), focusing on recent technical and analytical advances in the field. Examples are derived from simultaneous electro-encephalography (EEG) and electromyography (EMG) during functional MRI in cognitive and systems neuroscience as well as in clinical neurology, in particular in epilepsy and movement disorders. We conclude with an outlook on current and future efforts to achieve true integration of electrical and haemodynamic measures of neuronal activity using data fusion models. (authors)

  14. Evaluation of a System for High-Accuracy 3D Image-Based Registration of Endoscopic Video to C-Arm Cone-Beam CT for Image-Guided Skull Base Surgery

    Science.gov (United States)

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2014-01-01

    The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (~ 1–2 mm) limits the accuracy with which video and tomographic images can be registered. This paper reports the first implementation of image-based video-CBCT registration, conducts a detailed quantitation of the dependence of registration accuracy on system parameters, and demonstrates improvement in registration accuracy achieved by the image-based approach. Performance was evaluated as a function of parameters intrinsic to the image-based approach, including system geometry, CBCT image quality, and computational runtime. Overall system performance was evaluated in a cadaver study simulating transsphenoidal skull base tumor excision. Results demonstrated significant improvement (p < 0.001) in registration accuracy, with a mean reprojection distance error of 1.28 mm for the image-based approach versus 1.82 mm for the conventional tracker-based method. Image-based registration was highly robust against CBCT image quality factors of noise and resolution, permitting integration with low-dose intraoperative CBCT. PMID:23372078

  15. Three-Dimensional Innervation Zone Imaging from Multi-Channel Surface EMG Recordings.

    Science.gov (United States)

    Liu, Yang; Ning, Yong; Li, Sheng; Zhou, Ping; Rymer, William Z; Zhang, Yingchun

    2015-09-01

    There is an unmet need to accurately identify the locations of innervation zones (IZs) of spastic muscles, so as to guide botulinum toxin (BTX) injections for the best clinical outcome. A novel 3D IZ imaging (3DIZI) approach was developed by combining bioelectrical source imaging and surface electromyogram (EMG) decomposition methods to image the 3D distribution of IZs in the target muscles. Surface IZ locations of motor units (MUs), identified from the bipolar map of their MU action potentials (MUAPs), were employed as prior knowledge in the 3DIZI approach to improve its imaging accuracy. The performance of the 3DIZI approach was first optimized and evaluated via a series of designed computer simulations, and then validated with intramuscular EMG data, together with simultaneously recorded 128-channel surface EMG data from the biceps of two subjects. Both simulation and experimental validation results demonstrate the high performance of the 3DIZI approach in accurately reconstructing the distributions of IZs and the dynamic propagation of internal muscle activities in the biceps from high-density surface EMG recordings.

  16. A 3-D nonlinear recursive digital filter for video image processing

    Science.gov (United States)

    Bauer, P. H.; Qian, W.

    1991-01-01

    This paper introduces a recursive 3-D nonlinear digital filter, which is capable of performing noise suppression without degrading important image information such as edges in space or time. It also has the property of unnoticeable bandwidth reduction immediately after a scene change, which makes the filter an attractive preprocessor to many interframe compression algorithms. The filter consists of a nonlinear 2-D spatial subfilter and a 1-D temporal filter. In order to achieve the required computational speed and increase the flexibility of the filter, all of the linear shift-variant filter modules are of the IIR type.
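A first-order temporal recursive filter with a nonlinear, motion-adaptive gain, in the spirit of the temporal stage described above, might be sketched as follows. The gain rule and threshold here are illustrative assumptions, not the coefficients of the paper's filter:

```python
import numpy as np

def temporal_recursive_filter(frames, k_min=0.2, k_max=1.0, thresh=30.0):
    """Motion-adaptive first-order temporal IIR filter (illustration only).

    Small frame-to-frame differences are treated as noise and averaged
    heavily (gain k_min); large differences are treated as real motion or
    a scene change and passed through almost unchanged (gain k_max),
    which avoids smearing moving edges in time.
    """
    out = [frames[0].astype(float)]
    for x in frames[1:]:
        prev = out[-1]
        diff = np.abs(x - prev)
        k = np.where(diff > thresh, k_max, k_min)   # per-pixel gain
        out.append(prev + k * (x - prev))           # y[t] = y[t-1] + k(x[t] - y[t-1])
    return out
```

On static regions this converges toward the noise-free mean, while a pixel that jumps past the threshold is replaced immediately, the edge-preserving behaviour the abstract describes.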

  17. Visualizing Music: The Archaeology of Music-Video.

    Science.gov (United States)

    Berg, Charles M.

    Music videos, with their characteristic visual energy and frenetic music-and-dance numbers, have caught on rapidly since their introduction in 1981, bringing prosperity to a slumping record industry. Creating images to accompany existing music is, however, hardly a new idea. The concept can be traced back to 1877 and Thomas Edison's invention of…

  18. Multisensor fusion in gastroenterology domain through video and echo endoscopic image combination: a challenge

    Science.gov (United States)

    Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian

    2001-08-01

    The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline where physicians have the choice between several imaging modalities that offer complementary advantages. Among all existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient. The use of each system corresponds to a given step in the physician's diagnostic elaboration. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify color and superficial textures of the digestive tube. Unfortunately, the relief information, which is important for the diagnosis, is very difficult to retrieve. On the other hand, some studies have proved that 3D information can be easily quantified using echoendoscopy image sequences. That is why the idea of combining this information, acquired from two very different points of view, can be considered a real challenge for the medical image fusion topic. In this paper, after a review of current works concerning the numerical exploitation of videoendoscopy and echoendoscopy, the following question will be discussed: how can the use of the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We will then evaluate the feasibility of a realistic 3D reconstruction based both on information given by echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system will then follow. Further discussions and perspectives will conclude this first study.

  19. Exploring the clinical decision-making used by experienced cardiorespiratory physiotherapists: A mixed method qualitative design of simulation, video recording and think aloud techniques.

    Science.gov (United States)

    Thackray, Debbie; Roberts, Lisa

    2017-02-01

    The ability of physiotherapists to make clinical decisions is a vital component of being an autonomous practitioner, yet this complex phenomenon has been under-researched in cardiorespiratory physiotherapy. The purpose of this study was to explore clinical decision-making (CDM) by experienced physiotherapists in a scenario of a simulated patient experiencing acute deterioration of their respiratory function. The main objective of this observational study was to identify the actions, thoughts, and behaviours used by experienced cardiorespiratory physiotherapists in their clinical decision-making processes. A mixed-methods (qualitative) design employing observation and think-aloud was adopted, using a computerised manikin in a simulated environment. The participants clinically assessed the manikin, programmed with the same clinical signs, under standardised conditions in the clinical skills practice suite, which was set up as a ward environment. Experienced cardiorespiratory physiotherapists were recruited from clinical practice within a 50-mile radius of the University(*). Participants were video-recorded throughout the assessment and treatment and asked to verbalise their thought processes using the 'think-aloud' method. The recordings were transcribed verbatim and managed using a Framework approach. Eight cardiorespiratory physiotherapists participated (mean 7 years of clinical experience, range 3.5-16 years). CDM was similar to the collaborative hypothetico-deductive model, five-rights nursing model, reasoning strategies, inductive reasoning and pattern recognition. However, the CDM demonstrated by the physiotherapists was complex, interactive and iterative. Information processing occurred continuously throughout the whole interaction with the patient, and the specific cognitive skills of recognition, matching, discriminating, relating, inferring, synthesising and prediction were identified as being used sequentially.
The findings from this study were used to develop a new

  20. Quantifying fish swimming behavior in response to acute exposure of aqueous copper using computer assisted video and digital image analysis

    Science.gov (United States)

    Calfee, Robin D.; Puglis, Holly J.; Little, Edward E.; Brumbaugh, William G.; Mebane, Christopher A.

    2016-01-01

    Behavioral responses of aquatic organisms to environmental contaminants can be precursors of other effects such as survival, growth, or reproduction. However, these responses may be subtle, and measurement can be challenging. Using juvenile white sturgeon (Acipenser transmontanus) with copper exposures, this paper illustrates techniques used for quantifying behavioral responses using computer-assisted video and digital image analysis. In previous studies, severe impairments in swimming behavior were observed among early life stage white sturgeon during acute and chronic exposures to copper. Sturgeon behavior was rapidly impaired, to the extent that survival in the field would be jeopardized, as fish would be swept downstream or readily captured by predators. The objectives of this investigation were to illustrate protocols to quantify swimming activity during a series of acute copper exposures to determine time to effect during early life stage development, and to understand the significance of these responses relative to survival of these vulnerable early life stage fish. With mortality being on a time continuum, determining when copper first affects swimming ability helps us to understand the implications for population-level effects. The techniques used are readily adaptable to experimental designs with other organisms and stressors.

  1. Developing a Video Steganography Toolkit

    OpenAIRE

    Ridgway, James; Stannett, Mike

    2014-01-01

    Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM

  2. A New Learning Control System for Basketball Free Throws Based on Real Time Video Image Processing and Biofeedback

    Directory of Open Access Journals (Sweden)

    R. Sarang

    2018-02-01

    Full Text Available Shooting free throws plays an important role in basketball. The major problem in performing a correct free throw seems to be inappropriate training. Training is performed offline and it is often not that persistent. The aim of this paper is to consciously modify and control the free throw using biofeedback. Elbow and shoulder dynamics are calculated by an image processing technique equipped with a video image acquisition system. The proposed setup, named the learning control system, is able to quantify and provide feedback on the above parameters in real time as audio signals, enabling correct learning and conscious control of shooting. Experimental results showed improvements in the free throw shooting style, including the shot pocket and locked position. The mean values of the elbow and shoulder angles were controlled at approximately 89° and 26° for the shot pocket, and these angles were tuned to approximately 180° and 47°, respectively, for the locked position (close to the desired pattern of the free throw based on valid FIBA references). Not only were the mean values enhanced, but the standard deviations of these angles also decreased meaningfully, which shows shooting style convergence and uniformity. Also, in training conditions, the average percentage of successful free throws increased from about 64% to 87% after using this setup, and in competition conditions the average percentage of successful free throws increased by about 20%, although using the learning control system may not be the only reason for these outcomes. The proposed system is easy to use, inexpensive, portable and applicable in real time.
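The elbow and shoulder angles tracked by such a system reduce to a simple geometric computation once 2-D joint coordinates have been extracted from the video frames. A minimal sketch (the keypoint source is assumed, not part of the paper's setup):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by image points a-b-c.

    For the elbow angle, a = shoulder, b = elbow, c = wrist; the same
    function serves for the shoulder angle with a different point triple.
    """
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# A fully extended arm (collinear points) gives the 180-degree "locked" elbow:
print(joint_angle((0, 0), (1, 0), (2, 0)))   # 180.0
print(joint_angle((0, 0), (1, 0), (1, 1)))   # 90.0
```

Comparing each computed angle against a target value and sonifying the difference would give the real-time audio feedback the abstract describes.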

  3. Automated in-core image generation from video to aid visual inspection of nuclear power plant cores

    Energy Technology Data Exchange (ETDEWEB)

    Murray, Paul, E-mail: paul.murray@strath.ac.uk [Department of Electronic and Electrical Engineering, University of Strathclyde, Technology and Innovation Centre, 99 George Street, Glasgow, G1 1RD (United Kingdom); West, Graeme; Marshall, Stephen; McArthur, Stephen [Dept. Electronic and Electrical Engineering, University of Strathclyde, Royal College Building, 204 George Street, Glasgow G1 1XW (United Kingdom)

    2016-04-15

    Highlights: • A method is presented which improves visual inspection of reactor cores. • Significant time savings are made to activities on the critical outage path. • New information is extracted from existing data sources without additional overhead. • Examples from industrial case studies across the UK fleet of AGR stations. - Abstract: Inspection and monitoring of key components of nuclear power plant reactors is an essential activity for understanding the current health of the power plant and ensuring that they continue to remain safe to operate. As the power plants age, and the components degrade from their initial start-of-life conditions, the requirement for more and more detailed inspection and monitoring information increases. Deployment of new monitoring and inspection equipment on existing operational plant is complex and expensive, as the effect of introducing new sensing and imaging equipment to the existing operational functions needs to be fully understood. Where existing sources of data can be leveraged, the need for new equipment development and installation can be offset by the development of advanced data processing techniques. This paper introduces a novel technique for creating full 360° panoramic images of the inside surface of fuel channels from in-core inspection footage. Through the development of this technique, a number of technical challenges associated with the constraints of using existing equipment have been addressed. These include: the inability to calibrate the camera specifically for image stitching; dealing with additional data not relevant to the panorama construction; dealing with noisy images; and generalising the approach to work with two different capture devices deployed at seven different Advanced Gas Cooled Reactor nuclear power plants. The resulting data processing system is currently under formal assessment with a view to replacing the existing manual assembly of in-core defect montages. Deployment of the

  4. NOTE: Recording accelerator monitor units during electronic portal imaging: application to collimator position verification during IMRT

    Science.gov (United States)

    Glendinning, A. G.; Hunt, S. G.; Bonnett, D. E.

    2001-06-01

    The application of multiple portal image acquisition to collimator position verification during dynamic multileaf collimation (DMLC) using a commercial camera-based electronic portal imaging device (EPID) (Theraview™, Cablon Medical BV, Leusden, The Netherlands) mounted on an Elekta SL15i accelerator (Elekta Oncology Systems, Crawley, UK) is described. This is achieved using a custom-built dose acquisition system optically interfaced to both the camera control unit of the EPID and the monitor unit (MU) channel of the accelerator. The method uses the beam blanking camera control signal to trigger the dose acquisition system to read the cumulative accelerator MUs at the beginning and end of each period of image formation. A maximum delay of 15 ms has been estimated for recording of accelerator MUs in the current system. The camera interface was observed to have no effect on the operation of the EPID during normal clinical use and could therefore be left permanently in situ. Use of the system for collimator position verification of a test case is presented. The technique described uses a specific camera-based EPID and accelerator, although the general principle of using an EPID control signal to trigger recording of accelerator MUs may be applicable to other EPIDs/accelerators with suitable knowledge of the accelerator dosimetry system.

  5. Imaging and recording subventricular zone progenitor cells in live tissue of postnatal mice

    Directory of Open Access Journals (Sweden)

    Benjamin Lacar

    2010-07-01

    Full Text Available The subventricular zone (SVZ is one of two regions where neurogenesis persists in the postnatal brain. The SVZ, located along the lateral ventricle, is the largest neurogenic zone in the brain that contains multiple cell populations including astrocyte-like cells and neuroblasts. Neuroblasts migrate in chains to the olfactory bulb where they differentiate into interneurons. Here, we discuss the experimental approaches to record the electrophysiology of these cells and image their migration and calcium activity in acute slices. Although these techniques were in place for studying glial cells and neurons in mature networks, the SVZ raises new challenges due to the unique properties of SVZ cells, the cellular diversity, and the architecture of the region. We emphasize different methods, such as the use of transgenic mice and in vivo electroporation that permit identification of the different SVZ cell populations for patch clamp recording or imaging. Electroporation also permits genetic labeling of cells using fluorescent reporter mice and modification of the system using either RNA interference technology or floxed mice. In this review, we aim to provide conceptual and technical details of the approaches to perform electrophysiological and imaging studies of SVZ cells.

  6. Monochromatic blue light entrains diel activity cycles in the Norway lobster, Nephrops norvegicus (L.), as measured by automated video-image analysis

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2009-12-01

    Full Text Available There is growing interest in developing automated, non-invasive techniques for long-lasting, laboratory-based monitoring of behaviour in organisms from deep-water continental margins which are of ecological and commercial importance. We monitored the burrow emergence rhythms in the Norway lobster, Nephrops norvegicus, which included: (a) characterising the regulation of behavioural activity outside the burrow under monochromatic blue light-darkness (LD) cycles of 0.1 lx, recreating slope photic conditions (i.e. 200-300 m depth), and constant darkness (DD), which is necessary for the study of the circadian system; (b) testing the performance of a newly designed digital video-image analysis system for tracking locomotor activity. We used infrared USB web cameras and customised software (in Matlab 7.1) to acquire and process digital frames of eight animals at a rate of one frame per minute under consecutive photoperiod stages of nine days each: LD, DD, and LD (subdivided into two stages, LD1 and LD2, for analysis purposes). The automated analysis allowed the production of time series of locomotor activity based on movements of the animals' centroids. Data were studied with periodogram, waveform, and Fourier analyses. For the first time, we report robust diurnal burrow emergence rhythms during the LD period, which became weak in DD. Our results fit with field data accounting for midday peaks in catches at the depth of slopes. The comparison of the present locomotor pattern with those recorded at different light intensities clarifies the regulation of the clock of N. norvegicus at different depths.
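Centroid-based locomotor activity of the kind described, movement of each animal's centroid between frames captured once per minute, can be sketched as follows, assuming a static background image; the threshold and helper names are illustrative, not taken from the paper's software:

```python
import numpy as np

def centroid(frame, background, thresh=25):
    """Centroid (row, col) of pixels that differ from the background."""
    mask = np.abs(frame.astype(float) - background) > thresh
    if not mask.any():
        return None                      # animal not visible (e.g. in burrow)
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def activity(frames, background, thresh=25):
    """Per-interval locomotor activity: distance moved by the centroid."""
    cents = [centroid(f, background, thresh) for f in frames]
    dist = []
    for p, q in zip(cents, cents[1:]):
        dist.append(0.0 if p is None or q is None
                    else float(np.hypot(q[0] - p[0], q[1] - p[1])))
    return dist
```

The resulting distance time series is the kind of signal that periodogram and waveform analyses are then applied to.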

  7. Power Distortion Optimization for Uncoded Linear Transformed Transmission of Images and Videos.

    Science.gov (United States)

    Xiong, Ruiqin; Zhang, Jian; Wu, Feng; Xu, Jizheng; Gao, Wen

    2017-01-01

    Recently, there is a resurgence of interest in uncoded transmission for wireless visual communication. While conventional coded systems suffer from cliff effect as the channel condition varies dynamically, uncoded linear-transformed transmission (ULT) provides elegant quality degradation for wide channel SNR range. ULT skips non-linear operations, such as quantization and entropy coding. Instead, it utilizes linear decorrelation transform and linear scaling power allocation to achieve optimized transmission. This paper presents a theoretical analysis for power-distortion optimization of ULT. In addition to the observation in our previous work that a decorrelation transform can bring significant performance gain, this paper reveals that exploiting the energy diversity in transformed signal is the key to achieve the full potential of decorrelation transform. In particular, we investigated the efficiency of ULT with exact or inexact signal statistics, highlighting the impact of signal energy modeling accuracy. Based on that, we further proposed two practical energy modeling schemes for ULT of visual signals. Experimental results show that the proposed schemes improve the quality of reconstructed images by 3~5 dB, while reducing the signal modeling overhead from hundreds or thousands of meta data to only a few meta data. The perceptual quality of reconstruction is significantly improved.
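The linear scaling power allocation that ULT relies on has a well-known closed form for uncoded linear transmission: scale each transform chunk by its variance raised to the power -1/4 (as in SoftCast-style schemes), normalised to the power budget. A minimal sketch of that general result, not the authors' exact formulation:

```python
import numpy as np

def power_allocation(variances, total_power):
    """Per-chunk linear scaling factors minimising MSE under a power budget.

    For uncoded linear transmission the classic result is
    g_i proportional to lambda_i ** (-1/4), where lambda_i is the variance
    (energy) of transform chunk i, with the constant chosen so that
    sum(g_i**2 * lambda_i) equals the total transmit power.
    """
    lam = np.asarray(variances, float)
    c = np.sqrt(total_power / np.sum(np.sqrt(lam)))
    return c * lam ** -0.25

g = power_allocation([16.0, 1.0], total_power=5.0)
# Power check: the allocated chunk powers sum to the budget.
print(np.round(np.sum(g**2 * np.array([16.0, 1.0])), 6))   # 5.0
```

Note how the low-energy chunk gets a larger gain than the high-energy one; this exploitation of energy diversity across chunks is exactly what the paper identifies as the key to realising the decorrelation transform's full gain.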

  8. Power-Distortion Optimization for Uncoded Linear-Transformed Transmission of Images and Videos.

    Science.gov (United States)

    Xiong, Ruiqin; Zhang, Jian; Wu, Feng; Xu, Jizheng; Gao, Wen

    2016-10-26

    Recently, there is a resurgence of interest in uncoded transmission for wireless visual communication. While conventional coded systems suffer from cliff effect as the channel condition varies dynamically, uncoded linear-transformed transmission (ULT) provides elegant quality degradation for wide channel SNR range. ULT skips non-linear operations such as quantization and entropy coding. Instead, it utilizes linear decorrelation transform and linear scaling power allocation to achieve optimized transmission. This paper presents a theoretical analysis for power-distortion optimization of ULT. In addition to the observation in our previous work that a decorrelation transform can bring significant performance gain, this work reveals that exploiting the energy diversity in transformed signal is the key to achieve the full potential of decorrelation transform. In particular, we investigated the efficiency of ULT with exact or inexact signal statistics, highlighting the impact of signal energy modeling accuracy. Based on that, we further proposed two practical energy modeling schemes for ULT of visual signals. Experimental results show that the proposed schemes improve the quality of reconstructed images by 3-5 dB, while reducing the signal modeling overhead from hundreds or thousands of meta data to only a few meta data. The perceptual quality of reconstruction is significantly improved.

  9. Image/video understanding systems based on network-symbolic models and active vision

    Science.gov (United States)

    Kuvich, Gary

    2004-07-01

    Vision is part of an information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. It is hard to split the entire system apart, and vision mechanisms cannot be completely understood separately from informational processes related to knowledge and intelligence. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Vision is a component of situation awareness, motion and planning systems. Foveal vision provides semantic analysis, recognizing objects in the scene. Peripheral vision guides the fovea to salient objects and provides scene context. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise artificial computations of 3-D models. Network-Symbolic transformations derive more abstract structures that allow for invariant recognition of an object as an exemplar of a class and for reliable identification even if the object is occluded. Systems with such smart vision will be able to navigate in real environments and understand real-world situations.

  10. Active vision and image/video understanding systems for UGV based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-09-01

    Vision evolved as a sensory system for reaching, grasping and other motion activities. In advanced creatures, it has become a vital component of situation awareness, navigation and planning systems. Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. It is hard to split such a system apart. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for natural processing of visual information. It converts visual information into relational Network-Symbolic models, avoiding artificial precise computations of 3-dimensional models. The logic of visual scenes can be captured in such models and used for disambiguation of visual information. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create unambiguous network-symbolic models. This approach is consistent with NIST RCS. The UGV, equipped with such smart vision, will be able to plan its path and navigate in a real environment, perceive and understand complex real-world situations and act accordingly.

  11. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  12. Image-guided depth propagation for 2-D-to-3-D video conversion using superpixel matching and adaptive autoregressive model

    Science.gov (United States)

    Cai, Jiji; Jung, Cheolkon

    2017-09-01

    We propose image-guided depth propagation for two-dimensional (2-D)-to-three-dimensional (3-D) video conversion using superpixel matching and the adaptive autoregressive (AR) model. We adopt key frame-based depth propagation that propagates the depth map in the key frame to nonkey frames. Moreover, we use the adaptive AR model for depth refinement to penalize depth-color inconsistency. First, we perform superpixel matching to estimate motion vectors at the superpixel level instead of block matching based on a fixed block size. Then, we conduct depth compensation based on the motion vectors to generate the depth map in the nonkey frame. However, the sizes of two matched superpixels are not exactly the same due to the segment-based matching, which causes matching errors in the compensated depth map. Thus, we introduce an adaptive image-guided AR model to minimize matching errors and produce the final depth map by minimizing AR prediction errors. Finally, we employ depth-image-based rendering to generate stereoscopic views from 2-D videos and their depth maps. Experimental results demonstrate that the proposed method successfully performs depth propagation and produces high-quality depth maps for 2-D-to-3-D video conversion.

  13. [Video documentation in forensic practice].

    Science.gov (United States)

    Schyma, C; Schyma, P

    1995-01-01

    In Part 1, the authors report their experiences with the Canon Ex1 Hi camcorder and the documentation possibilities of modern video technology. Application examples in legal medicine and criminalistics are described: autopsy, scene, reconstruction of crimes, etc. Online video documentation of microscopic sessions makes the discussion of findings easier. The use of video films for instruction has met with a good response. The use of video documentation can be extended by digitizing (Part 2). Two frame grabbers are presented, with which we obtained good results in digitizing images captured from video. The best image quality is achieved by online use of an image analysis chain. Corel 5.0 and PicEd Cora 4.0 allow complete image processing and analysis. Digital image processing influences the objectivity of the documentation. The applicability of image libraries is discussed.

  14. Web tools for effective retrieval, visualization, and evaluation of cardiology medical images and records

    Science.gov (United States)

    Masseroli, Marco; Pinciroli, Francesco

    2000-12-01

    To provide easy retrieval, integration, and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser that provides a friendly medical user interface to perform queries on patient and medical test data and to integrate and properly visualize the various query results. A set of tools based on the Java Advanced Imaging API enables processing and analysis of the retrieved cardiology images and quantification of their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to deploy at any site where an intranet or Internet connection is available. By giving healthcare providers effective tools for querying, visualizing, and comprehensively evaluating cardiology medical images and records in all locations where they may need them (i.e., emergency rooms, operating theaters, wards, or even outpatient clinics), the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  15. Integration of Transport-relevant Data within Image Record of the Surveillance System

    Directory of Open Access Journals (Sweden)

    Adam Stančić

    2016-10-01

    Full Text Available Integration of information collected on the road within the image recorded by the surveillance system forms a unified source of transport-relevant data about the supervised situation. The basic assumption is that the integration procedure changes the image to an extent invisible to the human eye, while the integrated data keep identical content. This assumption has been proven by studying the statistical properties of the image and the integrated data using a mathematical model implemented in the programming language Python with functions from additional libraries (OpenCV, NumPy, SciPy and Matplotlib). The model has been used to compare meta-data input methods with methods of steganographic integration that adjust the Discrete Cosine Transform coefficients of the JPEG-compressed image. For the steganographic data processing, the steganographic algorithm F5 was used. The review paper analyses the advantages and drawbacks of the integration methods and presents examples of traffic situations in which the unified sources of transport-relevant information could be used.
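    The DCT-domain embedding the paper tests (via F5) can be illustrated with a heavily simplified sketch. Real F5 uses matrix encoding and decrements coefficient magnitudes with shrinkage handling; the toy functions below merely show the underlying principle of carrying message bits on the parity of nonzero quantized AC coefficients:

    ```python
    # Toy DCT-domain embedding (NOT the real F5 algorithm): message bits
    # ride on the least-significant bit of each nonzero quantized AC
    # coefficient; parity is corrected by moving the magnitude toward zero.

    def embed_bits(coeffs, bits):
        out, it = list(coeffs), iter(bits)
        for i, c in enumerate(out):
            if c == 0:
                continue  # zero coefficients carry no payload and are skipped
            try:
                b = next(it)
            except StopIteration:
                break  # message fully embedded
            if (abs(c) & 1) != b:          # parity mismatch: nudge magnitude
                out[i] = c - 1 if c > 0 else c + 1
        return out

    def extract_bits(coeffs, n):
        # Read back the parity of the first n nonzero coefficients.
        return [abs(c) & 1 for c in coeffs if c != 0][:n]
    ```

    Note that decrementing a ±1 coefficient to zero would desynchronize extraction ("shrinkage"), which is exactly the case the real F5 algorithm re-embeds; this sketch ignores it.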

  16. Free-viewpoint video synthesis from mixed resolution multi-view images and low resolution depth maps

    Science.gov (United States)

    Emori, Takaaki; Tehrani, Mehrdad P.; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    Streaming of multi-view and free-viewpoint video is potentially attractive, but due to bandwidth limitations, transmitting all multi-view video in high resolution may not be feasible. Our goal is to propose a new streaming data format that can be adapted to the limited bandwidth and is capable of free-viewpoint video streaming using multi-view video plus depth (MVD). Given a requested free viewpoint, we use the two closest views and the corresponding depth maps to perform free-viewpoint video synthesis. We propose a new data format that consists of all views and corresponding depth maps at a lowered resolution, plus the two views closest to the requested viewpoint at high resolution. When the requested viewpoint changes, the two closest viewpoints change as well, but one or both of these views are transmitted only in low resolution during the delay time; therefore, resolution compensation is required. In this paper, we investigated several cases in which one or both of the views are transmitted only in low resolution, and we propose an adequate view synthesis method for mixed-resolution multi-view video plus depth. Experimental results show that our framework achieves view synthesis quality close to that of high-resolution multi-view video plus depth.
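    The resolution-compensation step can be illustrated with a toy upsampling routine. This is a hypothetical sketch (nearest-neighbour interpolation on 2-D lists of pixel values), not the interpolation the paper actually uses:

    ```python
    # Nearest-neighbour upsampling: a view received at low resolution is
    # scaled up by an integer factor so it can be blended with a
    # high-resolution neighbouring view during free-viewpoint synthesis.

    def upsample_nn(img, factor):
        return [[img[r // factor][c // factor]          # replicate source pixel
                 for c in range(len(img[0]) * factor)]
                for r in range(len(img) * factor)]
    ```

    A production system would use a better interpolation kernel (bilinear or learned), since the quality of this step directly bounds the synthesized-view quality during the viewpoint-switch delay.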

  17. Image Fusion Through Multi-Resolution Contrast Decomposition

    Science.gov (United States)

    1989-07-07

    ...that are most relevant for perception. As an illustration, results are shown that were obtained by combining thermal and visual...registered CCD and FIR images on video tape. The images were thereafter digitized and brought into register. Finally, we digitally merged corresponding images...in the scene. The signals from both cameras were recorded on synchronized U-matic video tape recorders. Image merging: Fig. 9 shows the CCD (Fig. 9a

  18. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one-stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats.

  19. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Top Temperature (CTT) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  20. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Imagery (not Near Constant Contrast) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  1. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Near Constant Contrast (NCC) Imagery Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  2. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Land Surface Temperature (LST) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Land Surface Temperature (LST) from the Visible Infrared Imaging Radiometer Suite...

  3. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Ice Surface Temperature (IST) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  4. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Volcanic Ash Detection and Height Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of volcanic ash from the Visible Infrared Imaging Radiometer (VIIRS) instrument...

  5. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Sea Ice Characterization (SIC) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains an Environmental Data Record (EDR) of Sea Ice Characterization (SIC) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument...

  6. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Base Height (CBH) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Cloud Base Heights (CBH) from the Visible Infrared Imaging Radiometer Suite...

  7. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Ice Thickness and Age Environmental Data Records (EDRs) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Ice Thickness and Age from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  8. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Type and Phase Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of cloud type and phase from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  9. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Ocean Color/Chlorophyll (OCC) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Ocean Color/Chlorophyll (OCC) from the Visible Infrared Imaging Radiometer Suite...

  10. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Top Height (CTH) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  11. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Cover Layer (CCL) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality Environmental Data Record (EDR) of Cloud Cover Layers (CCL) from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  12. Images.

    Science.gov (United States)

    Barr, Catherine, Ed.

    1997-01-01

    The theme of this month's issue is "Images"--from early paintings and statuary to computer-generated design. Resources on the theme include Web sites, CD-ROMs and software, videos, books, and others. A page of reproducible activities is also provided. Features include photojournalism, inspirational Web sites, art history, pop art, and myths. (AEF)

  13. Functional Magnetic Resonance Imaging and Functional Near-Infrared Spectroscopy: Insights from Combined Recording Studies.

    Science.gov (United States)

    Scarapicchia, Vanessa; Brown, Cassandra; Mayo, Chantel; Gawryluk, Jodie R

    2017-01-01

    Although blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) is a widely available, non-invasive technique that offers excellent spatial resolution, it remains limited by practical constraints imposed by the scanner environment. More recently, functional near infrared spectroscopy (fNIRS) has emerged as an alternative hemodynamic-based approach that possesses a number of strengths where fMRI is limited, most notably in portability and higher tolerance for motion. To date, fNIRS has shown promise in its ability to shed light on the functioning of the human brain in populations and contexts previously inaccessible to fMRI. Notable contributions include infant neuroimaging studies and studies examining full-body behaviors, such as exercise. However, much like fMRI, fNIRS has technical constraints that have limited its application to clinical settings, including a lower spatial resolution and limited depth of recording. Thus, by combining fMRI and fNIRS in such a way that the two methods complement each other, a multimodal imaging approach may allow for more complex research paradigms than is feasible with either technique alone. In light of these issues, the purpose of the current review is to: (1) provide an overview of fMRI and fNIRS and their associated strengths and limitations; (2) review existing combined fMRI-fNIRS recording studies; and (3) discuss how their combined use in future research practices may aid in advancing modern investigations of human brain function.

  14. Underwater reflectance transformation imaging: a technology for in situ underwater cultural heritage object-level recording

    Science.gov (United States)

    Selmo, David; Sturt, Fraser; Miles, James; Basford, Philip; Malzbender, Tom; Martinez, Kirk; Thompson, Charlie; Earl, Graeme; Bevan, George

    2017-01-01

    There is an increasing demand for high-resolution recording of in situ underwater cultural heritage. Reflectance transformation imaging (RTI) has a proven track record in terrestrial contexts for acquiring high-resolution diagnostic data at small scales. The research presented here documents the first adaptation of RTI protocols to the subaquatic environment, with a scuba-deployable method designed around affordable off-the-shelf technologies. Underwater RTI (URTI) was used to capture detail from historic shipwrecks in both the Solent and the western Mediterranean. Results show that URTI can capture submillimeter levels of qualitative diagnostic detail from in situ archaeological material. In addition, this paper presents the results of experiments to explore the impact of turbidity on URTI. For this purpose, a prototype fixed-lighting semisubmersible RTI photography dome was constructed to allow collection of data under controlled conditions. The signal-to-noise data generated reveals that the RGB channels of underwater digital images captured in progressive turbidity degraded faster than URTI object geometry calculated from them. URTI is shown to be capable of providing analytically useful object-level detail in conditions that would render ordinary underwater photography of limited use.

  15. Video tracking in the extreme: video analysis for nocturnal underwater animal movement.

    Science.gov (United States)

    Patullo, B W; Jolley-Rogers, G; Macmillan, D L

    2007-11-01

    Computer analysis of video footage is one option for recording locomotor behavior for a range of neurophysiological and behavioral studies. This technique is reasonably well established and accepted, but its use for some behavioral analyses remains a challenge. For example, filming through water can lead to reflection, and filming nocturnal activity can reduce the resolution and clarity of filmed images. The aim of this study was to develop a noninvasive method for recording nocturnal activity in aquatic decapods and to test the accuracy of analysis by video-tracking software. We selected crayfish, Cherax destructor, because they are often active at night, they live underwater, and data on their locomotion are important for answering biological and physiological questions such as how they explore and navigate. We constructed recording arenas and filmed animals in infrared light. We then compared human observer data and software-acquired values. In this article, we outline important apparatus and software issues for obtaining reliable computer tracking.
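    The core operation behind such tracking software can be sketched as thresholded frame differencing. This is an illustrative simplification, not the authors' actual program; frames are represented as 2-D lists of grey levels:

    ```python
    # Locate a moving animal by differencing consecutive frames: pixels
    # whose grey level changed by more than `thresh` are treated as the
    # animal, and their centroid is reported as its position.

    def track_centroid(prev, curr, thresh=20):
        moved = [(r, c)
                 for r, row in enumerate(curr)
                 for c, v in enumerate(row)
                 if abs(v - prev[r][c]) > thresh]
        if not moved:
            return None  # no motion detected between this frame pair
        n = len(moved)
        return (sum(r for r, _ in moved) / n,
                sum(c for _, c in moved) / n)
    ```

    Under infrared illumination the background is static, which is what makes this simple difference-and-threshold scheme workable; reflections at the water surface are one of the failure modes the article discusses.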

  16. Examining the effect of task on viewing behavior in videos using saliency maps

    Science.gov (United States)

    Alers, Hani; Redi, Judith A.; Heynderickx, Ingrid

    2012-03-01

    Research has shown that when viewing still images, people look at them differently if instructed to evaluate their quality: they tend to focus less on the main features of the image and instead scan the entire image area looking for clues to its level of quality. It is questionable, however, whether this finding extends to videos, given their dynamic nature. One can argue that when watching a video the viewer will always focus on the dynamically changing features of the video regardless of the given task. To test whether this is true, an experiment was conducted in which half of the participants viewed videos with the task of quality evaluation, while the other half were simply told to watch the videos as if they were watching a movie on TV or a video downloaded from the internet. The videos contained content degraded with compression artifacts over a wide range of quality. An eye-tracking device was used to record the viewing behavior in both conditions. By comparing the behavior during each task, it was possible to observe a systematic difference in viewing behavior which seemed to correlate with the quality of the videos.
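    One simple way to quantify such a difference in viewing behaviour is to correlate the fixation heat maps recorded under the two tasks. The sketch below (Pearson correlation over flattened 2-D maps; the representation is an assumption, not taken from the paper) returns values near 1 for similar gaze patterns and lower values when the task shifts attention:

    ```python
    # Pearson correlation between two fixation heat maps, each a 2-D list
    # of accumulated gaze counts per image region.

    def map_correlation(map_a, map_b):
        a = [v for row in map_a for v in row]
        b = [v for row in map_b for v in row]
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb)
    ```

    Saliency research also uses metrics such as AUC or normalized scanpath saliency for this comparison; plain correlation is the most transparent starting point.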

  17. Video imaging of cytosolic Ca2+ in pancreatic beta-cells stimulated by glucose, carbachol, and ATP.

    Science.gov (United States)

    Theler, J M; Mollard, P; Guérineau, N; Vacher, P; Pralong, W F; Schlegel, W; Wollheim, C B

    1992-09-05

    In order to define the differences in the distribution of cytosolic free Ca2+ ([Ca2+]i) in pancreatic beta-cells stimulated with the fuel secretagogue glucose or the Ca(2+)-mobilizing agents carbachol and ATP, we applied digital video imaging to beta-cells loaded with fura-2. 83% of the cells responded to glucose with an increase in [Ca2+]i after a latency of 117 +/- 24 s (mean +/- S.E., 85 cells). Of these cells, 16% showed slow-wave oscillations (frequency 0.35/min). To assess the relationship between membrane potential and the distribution of the [Ca2+]i rise, digital image analysis and perforated patch-clamp methods were applied simultaneously. The system used allowed sufficient temporal resolution to visualize a subplasmalemmal Ca2+ transient due to a single glucose-induced action potential. Glucose could also elicit a slow depolarization which did not cause Ca2+ influx until the appearance of the first of a train of action potentials. [Ca2+]i rose progressively during spike firing. Inhibition of Ca2+ influx by EGTA abolished the glucose-induced rise in [Ca2+]i. In contrast, the peak amplitude of the [Ca2+]i response to carbachol was not significantly different in normal or Ca(2+)-deprived medium. Occasionally, the [Ca2+]i rise was polarized to one area of the cell, different from the subplasmalemmal rise caused by glucose. The amplitude of the response and the number of responding cells were significantly increased when carbachol was applied after the addition of high glucose (11.2 mM). ATP also raised [Ca2+]i and promoted both Ca2+ mobilization and Ca2+ influx. The intracellular distribution of [Ca2+]i was homogeneous during the onset of the response. A polarity in the [Ca2+]i distribution could be detected either in the descending phase of the peak or in subsequent peaks during [Ca2+]i oscillations caused by ATP.
In the absence of extracellular Ca2+, the sequential application of ATP and carbachol revealed that carbachol was still

  18. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment and on everyday life situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours relating to real-time mobile video communication, and we discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between the participating persons in all roles, largely due to the wide and growing reach of videos. Video in a social situation affects the cameramen (who record), the targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or the recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  19. Imaging three-dimensional innervation zone distribution in muscles from M-wave recordings

    Science.gov (United States)

    Zhang, Chuan; Peng, Yun; Liu, Yang; Li, Sheng; Zhou, Ping; Zev Rymer, William; Zhang, Yingchun

    2017-06-01

    Objective. To localize neuromuscular junctions in skeletal muscles in vivo, which is of great importance in understanding, diagnosing and managing neuromuscular disorders. Approach. A three-dimensional global innervation zone imaging technique was developed to characterize the global distribution of innervation zones, as an indication of the location and features of neuromuscular junctions, using electrically evoked high-density surface electromyogram recordings. Main results. The performance of the technique was evaluated in the biceps brachii of six intact human subjects. The geometric centers of the distributions of the reconstructed innervation zones were determined at a mean distance of 9.4 ± 1.4 cm from the reference plane, situated at the medial epicondyle of the humerus. A mean depth of 1.5 ± 0.3 cm was calculated from the geometric centers to the closest points on the skin. The results are consistent with those reported in previous histology studies. It was also found that the volumes and distributions of the reconstructed innervation zones changed as the stimulation intensity increased, until the supramaximal muscle response was achieved. Significance. The results demonstrate the high performance of the proposed imaging technique in noninvasively imaging the global distribution of innervation zones in the three-dimensional muscle space in vivo, and the feasibility of clinical applications such as guiding botulinum toxin injections in spasticity management or early diagnosis of the neurodegenerative progression of amyotrophic lateral sclerosis.

  20. Computer simulation of radiographic images sharpness in several system of image record; Processamento para simulacao da nitidez de imagens radiograficas para diversos sistemas de registro da imagem

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Marcia Aparecida; Schiable, Homero; Frere, Annie France; Marques, Paulo M.A. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Escola de Engenharia. Dept. de Engenharia Eletrica; Oliveira, Henrique J.Q. de; Alves, Fatima F.R. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Inst. de Fisica; Medeiros, Regina B. [Universidade Federal de Sao Paulo, SP (Brazil). Escola Paulista de Medicina. Dept. de Diagnostico por Imagem

    1996-12-31

    A method to predict, by computer simulation, the influence of the recording system on radiographic image sharpness is studied. The method is intended to show in advance the image that will be obtained for each type of film or screen-film combination used during the exposure. 8 refs., 2 figs.
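    The idea of predicting sharpness in advance can be illustrated by convolving an ideal edge profile with the point-spread function (PSF) of a hypothetical screen-film combination. This is a minimal 1-D sketch of the principle, not the authors' simulation (the PSF values below are invented for illustration):

    ```python
    # Convolve a signal (e.g. an ideal step edge in a radiograph) with a
    # recording system's PSF; the width of the resulting edge spread
    # predicts the blur the recorded image will show.

    def convolve(signal, psf):
        half = len(psf) // 2
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for k, w in enumerate(psf):
                j = i + k - half          # sample index under this PSF tap
                if 0 <= j < len(signal):  # treat out-of-range samples as zero
                    acc += w * signal[j]
            out.append(acc)
        return out
    ```

    A sharper screen-film combination corresponds to a narrower PSF, which leaves the step edge steeper; each recording system could thus be characterized by its own PSF and previewed before exposure.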