WorldWideScience

Sample records for video images recorded

  1. Usefulness of video images from an X-ray simulator in recordings of the treatment portal of pulmonary lesions

    International Nuclear Information System (INIS)

    Nishioka, Masayuki; Sakurai, Makoto; Fujioka, Tomio; Fukuoka, Masahiro; Kusunoki, Yoko; Nakajima, Toshifumi; Onoyama, Yasuto.

    1992-01-01

    Movement of the target volume should be taken into consideration in treatment planning. Respiratory movement is the greatest motion in radiotherapy for pulmonary lesions. We combined video with an X-ray simulator to record this movement. Of 50 patients whose images were recorded, respiratory movements of 0 to 4 mm, of 5 to 9 mm, and of more than 10 mm were observed in 13, 21, and 16 patients, respectively. Discrepancies of 5 to 9 mm and of more than 10 mm between simulator films and video images were observed in 14 and 13 patients, respectively. These results show that video images are useful in recording movement while considering respiratory motion. We recommend that a video system added to an X-ray simulator be used for treatment planning, especially in radiotherapy for pulmonary lesions. (author)

  2. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  3. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 microseconds it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
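
    The correction step described above amounts to a per-pixel linear unmixing with a calibrated 3x3 matrix. The sketch below illustrates that idea only; the matrix values and the function name are illustrative assumptions, not coefficients or code from the paper.

    ```python
    # Hypothetical per-pixel crosstalk correction in the spirit of the method above.
    # M describes how much each flash (R, G, B) leaks into each colour channel;
    # in practice it would come from the one-time calibration, these values are made up.
    import numpy as np

    M = np.array([[0.92, 0.06, 0.03],   # red channel response to the R, G, B flashes
                  [0.05, 0.88, 0.04],   # green channel response
                  [0.02, 0.07, 0.90]])  # blue channel response
    M_inv = np.linalg.inv(M)

    def unmix_field(rgb_field):
        """Split one colour video field (H x W x 3 float array in [0, 1]) into three
        crosstalk-corrected frames, one per flash exposure."""
        h, w, _ = rgb_field.shape
        corrected = rgb_field.reshape(-1, 3) @ M_inv.T        # apply correction per pixel
        corrected = np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)
        return [corrected[..., i] for i in range(3)]          # frame i = scene lit by flash i
    ```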

  4. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  5. Video Recordings in Public Libraries.

    Science.gov (United States)

    Doyle, Stephen

    1984-01-01

    Reports on development and operation of public library collection of video recordings, describes results of user survey conducted over 6-month period, and offers brief guidelines. Potential users, censorship and copyright, organization of collection, fees, damage and loss, funding, purchasing and promotion, formats, processing and cataloging,…

  6. Dedicated data recording video system for Spacelab experiments

    Science.gov (United States)

    Fukuda, Toshiyuki; Tanaka, Shoji; Fujiwara, Shinji; Onozuka, Kuniharu

    1984-04-01

    A feasibility study of video tape recorder (VTR) modification to add a data recording capability was conducted. This is an on-board system to support Spacelab experiments as a dedicated video system and a dedicated data recording system, operating independently of the normal operation of the Orbiter, Spacelab and the other experiments. It continuously records the video image signals together with the acquired data, status and operator's voice at the same time on one cassette video tape. Items such as the crews' actions, animals' behavior, microscopic views and melting materials in a furnace are recorded. It is therefore expected that experimenters can easily and conveniently analyze the synchronized video, voice and data signals in their post-flight analysis.

  7. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  8. Multiple Generations on Video Tape Recorders.

    Science.gov (United States)

    Wiens, Jacob H.

    Helical scan video tape recorders were tested for their dubbing characteristics in order to make selection data available to media personnel. The equipment, two recorders of each type tested, was submitted by the manufacturers. The test was designed to produce quality evaluations for three generations of a single tape, thereby encompassing all…

  9. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Currently most ophthalmic operating rooms are equipped with an analog video recording system [an analog Charge Coupled Device camera for video grabbing and a Video Cassette Recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software and formats for archiving, are discussed.

  10. Video stereopsis of cardiac MR images

    International Nuclear Information System (INIS)

    Johnson, R.F. Jr.; Norman, C.

    1988-01-01

    This paper describes MR images of the heart acquired using a spin-echo technique synchronized to the electrocardiogram. Sixteen 0.5-cm-thick sections with a 0.1-cm gap between each section were acquired in the coronal view to cover all the cardiac anatomy including vasculature. Two sets of images were obtained with a subject rotation corresponding to the stereoscopic viewing angle of the eyes. The images were digitized, spatially registered, and processed by a three-dimensional graphics work station for stereoscopic viewing. Video recordings were made of each set of images and then temporally synchronized to produce a single video image corresponding to the appropriate eye view

  11. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image, which becomes gradually clearer, on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm⁻¹. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  12. Radiation effects on video imagers

    International Nuclear Information System (INIS)

    Yates, G.J.; Bujnosek, J.J.; Jaramillo, S.A.; Walton, R.B.; Martinez, T.M.; Black, J.P.

    1985-01-01

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analyzing stored photocharge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented

  13. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  14. Computational multispectral video imaging [Invited].

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
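
    The reconstruction step described above ("inverted via regularization-based linear algebra") is commonly a Tikhonov-regularised least-squares solve. The sketch below shows that generic step under the assumption of a calibrated forward operator A; it is not the authors' code, and the regularisation weight is illustrative.

    ```python
    # Generic Tikhonov-regularised inversion of the calibrated spectral code.
    # A is an (n_measurements x n_spectral_bins) forward operator from calibration,
    # b the measured sensor values; both are assumed inputs, lam is illustrative.
    import numpy as np

    def reconstruct_spectrum(A, b, lam=1e-2):
        """Solve min_x ||A x - b||^2 + lam ||x||^2 in closed form."""
        AtA = A.T @ A
        x = np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), A.T @ b)
        return x
    ```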

  15. Implications of the law on video recording in clinical practice.

    Science.gov (United States)

    Henken, Kirsten R; Jansen, Frank Willem; Klein, Jan; Stassen, Laurents P S; Dankelman, Jenny; van den Dobbelsteen, John J

    2012-10-01

    Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health care practice. Jurisprudence was searched to exemplify legislation on video recording in health care. In addition, legislation was translated for different applications of video in health care found in the literature. Three principles in Western law are relevant for video recording in health care practice: (1) regulations on privacy regarding personal data, which apply to the gathering and processing of video data in health care settings; (2) the patient record, in which video data can be stored; and (3) professional secrecy, which protects the privacy of patients including video data. Practical implementation of these principles in video recording in health care does not exist. Practical regulations on video recording in health care for different specifically defined purposes are needed. Innovations in video capture technology that enable video data to be made anonymous automatically can contribute to protection for the privacy of all the people involved.

  16. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    K.R. Henken (Kirsten R.); F-W. Jansen (Frank-Willem); J. Klein (Jan); L.P. Stassen (Laurents); J. Dankelman (Jenny); J.J. van den Dobbelsteen (John)

    2012-01-01

    Background: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A

  17. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    Henken, K.R.; Jansen, F.W.; Klein, J.; Stassen, L.P.S.; Dankelman, J.; Van den Dobbelsteen, J.J.

    2012-01-01

    Background Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear

  18. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. It discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  19. 3D reconstruction of cystoscopy videos for comprehensive bladder records.

    Science.gov (United States)

    Lurie, Kristen L; Angst, Roland; Zlatev, Dimitar V; Liao, Joseph C; Ellerbee Bowden, Audrey K

    2017-04-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize a 3D model of organs from an endoscopic video that captures the shape and surface appearance of the organ. A key aspect of our strategy is the use of advanced computer vision techniques and unmodified, clinical-grade endoscopy hardware with few constraints on the image acquisition protocol, which presents a low barrier to clinical translation. We validate the accuracy and robustness of our reconstruction and co-registration method using cystoscopy videos from tissue-mimicking bladder phantoms and show clinical utility during cystoscopy in the operating room for bladder cancer evaluation. As our method can powerfully augment the visual medical record of the appearance of internal organs, it is broadly applicable to endoscopy and represents a significant advance in cancer surveillance opportunities for big-data cancer research.

  20. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video images is of great significance for military and medical applications, but nighttime video images are of such poor quality that the target and background cannot be recognized. We therefore enhance nighttime video by fusing infrared video images with visible video images. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ weighted algorithm to fuse heterologous nighttime images. A transfer matrix is deduced from the improved SIFT algorithm and used to rapidly register the heterologous nighttime images, while the αβ weighted algorithm can be applied to any scene. In the video image fusion system, the transfer matrix registers every frame and the αβ weighted method then fuses every frame, which meets the timing requirements of video. The fused video image not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and the fused video plays back smoothly.
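
    A rough per-frame sketch of the register-then-blend idea described above follows, using stock OpenCV SIFT (not the paper's improved variant) and illustrative alpha/beta weights; both frames are assumed to be 8-bit grayscale for simplicity.

    ```python
    # Per-frame register-then-blend sketch. cv2.SIFT_create is stock OpenCV SIFT,
    # not the paper's improved variant; alpha/beta weights are illustrative and
    # both frames are assumed to be 8-bit grayscale of the same size.
    import cv2
    import numpy as np

    def register_and_fuse(ir_frame, vis_frame, alpha=0.6, beta=0.4):
        sift = cv2.SIFT_create()
        kp_ir, des_ir = sift.detectAndCompute(ir_frame, None)
        kp_vis, des_vis = sift.detectAndCompute(vis_frame, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_ir, des_vis)
        src = np.float32([kp_ir[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_vis[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # the "transfer matrix"
        h, w = vis_frame.shape[:2]
        ir_registered = cv2.warpPerspective(ir_frame, H, (w, h))
        return cv2.addWeighted(ir_registered, alpha, vis_frame, beta, 0)
    ```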

  1. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  2. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  3. Implications of the law on video recording in clinical practice

    OpenAIRE

    Henken, Kirsten R.; Jansen, Frank-Willem; Klein, Jan; Stassen, Laurents; Dankelman, Jenny; Dobbelsteen, John

    2012-01-01

    Background: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health ...

  4. Video Recording and the Research Process

    Science.gov (United States)

    Leung, Constant; Hawkins, Margaret R.

    2011-01-01

    This is a two-part discussion. Part 1 is entitled "English Language Learning in Subject Lessons", and Part 2 is titled "Video as a Research Tool/Counterpoint". Working with different research concerns, the authors attempt to draw attention to a set of methodological and theoretical issues that have emerged in the research process using video data.…

  5. Clients experience of video recordings of their psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    Background: Due to the development of technologies and the low costs, video recordings of psychotherapy sessions have gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies of how these recordings are experienced by the clients. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents a qualitative, explorative study of clients' experiences...

  6. Super VHS video cassette recorder, A-SB88; Super VHS video A-SB88

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    A super VHS video cassette recorder, A-SB88, was commercialized with no compromises in picture quality, sound quality, operability, energy conservation, design, etc. For picture quality, the VCR is equipped with the S-ET system, capable of realizing quality comparable to S-VHS on a normal tape, with a three-dimensional Y/C separation circuit for dynamic moving-image detection, three-dimensional DNR (digital noise reduction), TBC (time base corrector), and an FE (flying erase) circuit. For operability, it is provided with a remote control with a large LCD, 400x high-speed rewind, a reservation system that can simply reserve, for example, a serial drama, and a function for searching the end of a recording; in the environmental aspect, the stand-by power consumption was reduced to 1/10 of conventional models (ratio with Toshiba A-BS6 at display power off). (translated by NEDO)

  7. Selection and evaluation of video tape recorders for surveillance applications

    International Nuclear Information System (INIS)

    Martinez, R.L.

    1988-01-01

    Unattended surveillance places unique requirements on video recorders. One such requirement, extended operational reliability, often cannot be determined from the manufacturers' data. Following market surveys and preliminary testing, the Sony 8mm EVO-210 recorder was selected for use in the Modular Integrated Video System (MIVS) while concurrently undergoing extensive reliability testing. A microprocessor-based controller was developed to life-test and evaluate the performance of the video cassette recorders. The controller has the capability to insert a unique binary count in the vertical interval of the recorder video signal for each scene. This feature allows for automatic verification of the recorded data using a MIVS Review Station. Initially, twenty recorders were subjected to the accelerated life test, which involves recording one scene (eight video frames) every 15 seconds. The recorders were operated in the exact manner in which they are utilized in the MIVS. This paper describes the results of the preliminary testing, the accelerated life test, and the extensive testing of 130 Sony EVO-210 recorders

  8. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The resolution in the fluoroscopic image, which is visually determined, depends on the dose rate and the inertia of the television pick-up tube; this relationship is discussed. In the last few years, video recorders have been increasingly used in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of functional prototypes of high-resolution magnetic video recorders shows that this quality drop may be largely overcome. The influence of electrical bandwidth and number of lines on the resolution of the stored X-ray television image is explained in more detail. (orig.) [de

  9. Guided filtering for solar image/video processing

    Directory of Open Access Journals (Sweden)

    Long Xu

    2017-06-01

    A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily pick out important fine structures embedded in the recorded images/movies for solar observation. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominences, coronal mass ejections, magnetic fields, and so on. The experimental results prove that the proposed algorithm gives significant enhancement of the visual quality of solar images beyond the original input and several classical image enhancement algorithms, thus facilitating easier determination of interesting solar burst activities from recorded images/movies.
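
    A minimal grey-scale guided filter (in the spirit of He et al.), using the image as its own guide plus a simple detail-boost step, is sketched below as one plausible core of such an enhancement pipeline; the radius, eps and gain values are illustrative, not the paper's.

    ```python
    # Minimal grey-scale guided filter (He et al. style) with the image as its own
    # guide, plus a simple detail boost; radius, eps and gain are illustrative.
    import cv2
    import numpy as np

    def guided_filter(guide, src, radius=8, eps=1e-3):
        """Edge-preserving smoothing of src steered by guide (both float32 in [0, 1])."""
        ksize = (2 * radius + 1, 2 * radius + 1)
        mean = lambda x: cv2.blur(x, ksize)
        mean_I, mean_p = mean(guide), mean(src)
        cov_Ip = mean(guide * src) - mean_I * mean_p
        var_I = mean(guide * guide) - mean_I * mean_I
        a = cov_Ip / (var_I + eps)
        b = mean_p - a * mean_I
        return mean(a) * guide + mean(b)

    def enhance(img, gain=3.0):
        """Highlight fine (fibrous) structures: smoothed base + amplified detail."""
        base = guided_filter(img, img)
        return np.clip(base + gain * (img - base), 0.0, 1.0)
    ```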

  10. Progress in video immersion using Panospheric imaging

    Science.gov (United States)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  11. Recorded peer video chat as a research and development tool

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Cowie, Bronwen

    2016-01-01

    When practising teachers take time to exchange their experiences and reflect on their teaching realities as critical friends, they add meaning and depth to educational research. When peer talk is facilitated through video chat platforms, teachers can meet (virtually) face to face even when...... recordings were transcribed and used to prompt further discussion. The recording of the video chat meetings provided an opportunity for researchers to listen in and follow up on points they felt needed further unpacking or clarification. The recorded peer video chat conversations provided an additional...... opportunity to stimulate and support teacher participants in a process of critical analysis and reflection on practice. The discussions themselves were empowering because in the absence of the researcher, the teachers, in negotiation with peers, choose what is important enough to them to take time to discuss....

  12. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the magnified vision of the surgeon, so as to make the viewer aware of the difference from the naked-eye view. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functionality is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  13. Structural image and video understanding

    NARCIS (Netherlands)

    Lou, Z.

    2016-01-01

    In this thesis, we have discussed how to exploit the structures in several computer vision topics. The five chapters addressed five computer vision topics using the image structures. In chapter 2, we proposed a structural model to jointly predict the age, expression and gender of a face. By modeling

  14. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
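
    The pupil-boundary modelling step can be illustrated with a least-squares ellipse fit to a thresholded contour, as sketched below; the threshold value, the largest-dark-blob assumption and the use of OpenCV's fitEllipse (rather than the paper's exact direct least-squares implementation) are all assumptions for illustration.

    ```python
    # Illustrative pupil-boundary modelling: threshold a dark candidate region,
    # take its largest contour and fit an ellipse by least squares. The threshold
    # and the largest-dark-blob assumption are illustrative, not the paper's.
    import cv2

    def fit_pupil_ellipse(gray_frame, pupil_thresh=40):
        """gray_frame: 8-bit grayscale eye image. Returns ((cx, cy), (MA, ma), angle) or None."""
        _, mask = cv2.threshold(gray_frame, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return None
        boundary = max(contours, key=cv2.contourArea)   # assume the largest dark blob is the pupil
        if len(boundary) < 5:                           # fitEllipse needs at least 5 points
            return None
        return cv2.fitEllipse(boundary)
    ```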

  15. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Aran Oya

    2007-01-01

    We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language, which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people, that involve SL and CS video synthesis.

  16. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were investigated as a case study). As the main supporting assumption, it has been accepted that the content can be compressed as long as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to rank compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.
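
    The subjective test described above ranks clips by pairwise clinician comparisons under a bubble sort; a toy sketch of that ranking loop follows, where ask_clinician is a hypothetical callback standing in for the human judgement.

    ```python
    # Toy version of the subjective ranking: a bubble sort whose comparisons are
    # delegated to a clinician. ask_clinician is a hypothetical callback that
    # returns True if the first clip looks worse than the second.
    def rank_clips(clips, ask_clinician):
        """Return the clips ordered from best to worst perceived quality."""
        order = list(clips)
        for end in range(len(order) - 1, 0, -1):
            for i in range(end):
                if ask_clinician(order[i], order[i + 1]):
                    order[i], order[i + 1] = order[i + 1], order[i]
        return order
    ```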

  17. 3D reconstruction of cystoscopy videos for comprehensive bladder records

    OpenAIRE

    Lurie, Kristen L.; Angst, Roland; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.

    2017-01-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize ...

  18. THE DETERMINATION OF THE SHARPNESS DEPTH BORDERS AND CORRESPONDING PHOTOGRAPHY AND VIDEO RECORDING PARAMETERS FOR CONTEMPORARY VIDEO TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    E. G. Zaytseva

    2011-01-01

    The method for determining the sharpness depth borders was improved for contemporary video technology. A computer programme for determining the corresponding video recording parameters was created.

  19. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image

    International Nuclear Information System (INIS)

    Nova, Joao Luiz Leocadio da; Lopes, Ricardo Tadeu

    1996-01-01

    A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures with an on-line video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from video imaging.

  20. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Precipitation Video Imager (PVI) GCPEx dataset collected precipitation particle images and drop size distribution data from November 2011...

  1. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  2. Biased lineup instructions and face identification from video images.

    Science.gov (United States)

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.

  3. EEG in the classroom: Synchronised neural recordings during video presentation

    DEFF Research Database (Denmark)

    Poulsen, Andreas Trier; Kamronn, Simon Due; Dmochowski, Jacek

    2017-01-01

    We performed simultaneous recordings of electroencephalography (EEG) from multiple students in a classroom, and measured the inter-subject correlation (ISC) of activity evoked by a common video stimulus. The neural reliability, as quantified by ISC, has been linked to engagement and attentional......-evoked neural responses, known to be modulated by attention, can be tracked for groups of students with synchronized EEG acquisition. This is a step towards real-time inference of engagement in the classroom....
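
    A simplified sketch of an inter-subject correlation score, the mean pairwise Pearson correlation between subjects' time courses for one EEG component, is shown below. The study itself uses correlated component analysis; this sketch covers only the correlation step and assumes an array of shape (n_subjects, n_samples).

    ```python
    # Simplified inter-subject correlation: the mean pairwise Pearson correlation
    # between subjects' time courses for one EEG component. The study uses
    # correlated component analysis; this sketch only covers the correlation step.
    import numpy as np
    from itertools import combinations

    def isc(component_timecourses):
        """component_timecourses: array of shape (n_subjects, n_samples)."""
        n_subjects = component_timecourses.shape[0]
        pairs = combinations(range(n_subjects), 2)
        r = [np.corrcoef(component_timecourses[i], component_timecourses[j])[0, 1]
             for i, j in pairs]
        return float(np.mean(r))
    ```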

  4. Comparison of cardiopulmonary resuscitation techniques using video camera recordings.

    OpenAIRE

    Mann, C J; Heyworth, J

    1996-01-01

    OBJECTIVE--To use video recordings to compare the performance of resuscitation teams in relation to their previous training in cardiac resuscitation. METHODS--Over a 10 month period all cardiopulmonary resuscitations carried out in an accident and emergency (A&E) resuscitation room were videotaped. The following variables were monitored: (1) time to perform three defibrillatory shocks; (2) time to give intravenous adrenaline (centrally or peripherally); (3) the numbers and grade of medical an...

  5. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast' image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images have been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  6. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output, which can easily be accessed by everyone via a web interface (with the browser of your choice), and it can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)

  7. Dynamic Image Stitching for Panoramic Video

    Directory of Open Access Journals (Sweden)

    Jen-Yu Shieh

    2014-10-01

    The design presented in this paper is based on dynamic image stitching for panoramic video. Utilizing the OpenCV computer vision library and the SIFT algorithm as the basis, this article brings forward a Gaussian second-difference MoG, processed on the basis of the DoG (Difference of Gaussians) map, to reduce the order of dynamic image synthesis and to simplify the Gaussian pyramid algorithm. MSIFT is combined with an overlapping segmentation method to limit the scope of feature extraction and enhance speed. Through this method, traditional image synthesis can be improved without requiring long calculation times or being limited by space and angle. This research uses four normal webcams and two IP cameras coupled with several wide-angle lenses. Wide-angle lenses monitor a wide area, and image stitching then achieves the panoramic effect. For the overall image application and control interface, Microsoft Visual Studio C# is adopted to construct the software interface. On a personal computer with a 2.4-GHz CPU and 2 GB of RAM and with the cameras fixed to it, the execution speed is three images per second, which reduces the calculation time of the traditional algorithm.
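
    As a hedged illustration of the stitching step, a minimal sketch using OpenCV's built-in stitcher (rather than the paper's MSIFT/MoG pipeline) follows; the frame handling is an assumption, not the authors' implementation.

    ```python
    # Minimal multi-camera stitching sketch using OpenCV's built-in stitcher
    # rather than the paper's MSIFT/MoG pipeline; frames is a list of overlapping
    # BGR frames grabbed from the webcams.
    import cv2

    def stitch_frames(frames):
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, panorama = stitcher.stitch(frames)
        if status != 0:                     # 0 == cv2.Stitcher_OK
            raise RuntimeError(f"stitching failed with status {status}")
        return panorama
    ```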

  8. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. However, not all problems can be solved automatically; for some applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Actually, the idea of interactive computer systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  9. Video Vortex reader II: moving images beyond YouTube

    NARCIS (Netherlands)

    Lovink, G.; Somers Miles, R.

    2011-01-01

    Video Vortex Reader II is the Institute of Network Cultures' second collection of texts that critically explore the rapidly changing landscape of online video and its use. With the success of YouTube ('2 billion views per day') and the rise of other online video sharing platforms, the moving image

  10. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    Science.gov (United States)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels in consumer and scientific imaging devices continues to grow, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  11. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades the quality. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2 books) was to introduce the problem of speckle in ultrasound image and video as well as the theoretical background, algorithmic steps, and the MATLAB™ code for the following group of despeckle filters:

  12. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  13. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
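
    An illustrative software analogue of the sample-point comparison described above is sketched below: both ends derive the same pseudo-random sample points from a shared seed, and the image is authenticated when most gray values agree within a tolerance. The seed, point count, tolerance and agreement threshold are made-up parameters, not values from the paper.

    ```python
    # Software analogue of the sample-point comparison: both ends derive the same
    # pseudo-random points from a shared seed and the image is authenticated when
    # most gray values agree within a tolerance. All parameters are made up.
    import numpy as np

    def authenticate(camera_frame, recorder_frame, seed=1234, n_points=64,
                     tolerance=8, min_agreement=0.9):
        """Both frames: 2-D uint8 gray-scale images of identical size."""
        rng = np.random.default_rng(seed)               # shared "secret" seed
        h, w = camera_frame.shape
        ys = rng.integers(0, h, n_points)
        xs = rng.integers(0, w, n_points)
        diff = np.abs(camera_frame[ys, xs].astype(int) -
                      recorder_frame[ys, xs].astype(int))
        return float(np.mean(diff <= tolerance)) >= min_agreement
    ```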

  14. Diagnostic image quality of video-digitized chest images

    International Nuclear Information System (INIS)

    Winter, L.H.; Butler, R.B.; Becking, W.B.; Warnars, G.A.O.; Haar Romeny, B. ter; Ottes, F.P.; Valk, J.-P.J. de

    1989-01-01

    The diagnostic accuracy obtained with the Philips picture archiving and communications subsystem was investigated by means of an observer performance study using receiver operating characteristic (ROC) analysis. The image quality of conventional films and video-digitized images was compared. The scanner had a 1024 x 1024 x 8 bit memory. The digitized images were displayed on a 60 Hz interlaced display monitor with 1024 lines. Posteroanterior (PA) roentgenograms of a chest phantom with superimposed simulated interstitial pattern disease (IPD) were produced; there were 28 normal and 40 abnormal films. Normal films were produced by the chest phantom alone. Abnormal films were taken of the chest phantom with varying degrees of superimposed simulated interstitial disease (PND) for an observer performance study, because the results of a simulated interstitial pattern disease study are less likely to be influenced by perceptual capabilities. The conventional films and the video-digitized images were viewed by five experienced observers during four separate sessions. Conventional films were presented on a viewing box; the digital images were displayed on the monitor described above. The presence of simulated interstitial disease was indicated on a 5-point ROC certainty scale by each observer. We statistically analyzed the differences between ROC curves derived from correlated data. The mean time required to evaluate 68 digitized images is approximately four times the mean time needed to read the conventional films. The diagnostic quality of the video-digitized images was significantly lower (at the 5% level) than that of the conventional films (median area under the curve (AUC) of 0.71 and 0.94, respectively). (author). 25 refs.; 2 figs.; 4 tabs

  15. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
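
    A software analogue of the trigger logic described above is sketched below: a slow running average captures long-term (DC-like) drift, a frame-to-frame difference captures short-term (AC-like) events, and a ring buffer preserves pre-trigger frames. All thresholds are illustrative, and this is not the VLSI/fuzzy-logic implementation itself.

    ```python
    # Software analogue of the trigger logic: a slow running average catches
    # long-term (DC-like) drift, a frame difference catches short-term (AC-like)
    # events, and a ring buffer keeps pre-trigger frames. Thresholds are illustrative.
    import collections
    import numpy as np

    class EventTrigger:
        def __init__(self, ac_thresh=12.0, dc_thresh=20.0, alpha=0.01, pre_frames=30):
            self.background = None                     # long-term (DC) reference
            self.previous = None                       # last frame, for AC changes
            self.alpha = alpha                         # background update rate
            self.ac_thresh, self.dc_thresh = ac_thresh, dc_thresh
            self.pretrigger = collections.deque(maxlen=pre_frames)

        def update(self, frame):
            """Feed one gray-scale frame; returns True when an event is detected."""
            frame = frame.astype(np.float32)
            if self.background is None:
                self.background = frame.copy()
                self.previous = frame.copy()
            ac = np.mean(np.abs(frame - self.previous))
            dc = np.mean(np.abs(frame - self.background))
            self.previous = frame
            self.background = (1 - self.alpha) * self.background + self.alpha * frame
            self.pretrigger.append(frame)              # pre-trigger storage
            return ac > self.ac_thresh or dc > self.dc_thresh
    ```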

  16. Analysis of two dimensional charged particle scintillation using video image processing techniques

    International Nuclear Information System (INIS)

    Sinha, A.; Bhave, B.D.; Singh, B.; Panchal, C.G.; Joshi, V.M.; Shyam, A.; Srinivasan, M.

    1993-01-01

    A novel method for video recording of individual charged particle scintillation images and their offline analysis using digital image processing techniques for obtaining position, time and energy information is presented. Results of an exploratory experiment conducted using 241Am and 239Pu alpha sources are presented. (author). 3 figs., 4 tabs.

  17. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes, heterogeneous backgrounds, and in cloudy images. The proposed system fully automatizes the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and arrived at new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  18. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    Science.gov (United States)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e. images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images, when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).

  19. Video-EEG recording: a four-year clinical audit.

    LENUS (Irish Health Repository)

    O'Rourke, K

    2012-02-03

    In the setting of a regional neurological unit without an epilepsy surgery service, as in our case, video-EEG telemetry is undertaken for three main reasons: to investigate whether frequent paroxysmal events represent seizures when there is clinical doubt, to attempt anatomical localization of partial seizures when standard EEG is unhelpful, and to attempt to confirm that seizures are non-epileptic when this is suspected. A clinical audit of all telemetry performed over a four-year period was carried out in order to determine the clinical utility of this aspect of the service and to identify means of improving its effectiveness in the unit. Analysis of the data showed a high rate of negative studies with no attacks recorded. Of the positive studies, approximately 50% showed non-epileptic attacks. Strategies for improving the rate of positive investigations are discussed.

  20. Nesting behavior of Palila, as assessed from video recordings

    Science.gov (United States)

    Laut, M.E.; Banko, P.C.; Gray, E.M.

    2003-01-01

    We quantified nesting behavior of Palila (Loxioides bailleui), an endangered Hawaiian honeycreeper, by recording at nests during three breeding seasons using a black-and-white video camera connected to a videocassette recorder. A total of seven nests was observed. We measured the following factors for daylight hours: percentage of time the female was on the nest (attendance), length of attendance bouts by the female, length of nest recesses, and adult provisioning rates. Comparisons were made between three stages of the 40-day nesting cycle: incubation (day 1-day 16), early nestling stage (day 17-day 30 [i.e., nestlings ≤ 14 days old]), and late nestling stage (day 31-day 40 [i.e., nestlings > 14 days old]). Of seven nests observed, four fledged at least one nestling and three failed. One of these failed nests was filmed being depredated by a feral cat (Felis catus). Female nest attendance was near 82% during the incubation stage and decreased to 21% as nestlings aged. We did not detect a difference in attendance bout length between stages of the nesting cycle. Mean length of nest recesses increased from 4.5 min during the incubation stage to over 45 min during the late nestling stage. Mean number of nest recesses per hour ranged from 1.6 to 2.0. Food was delivered to nestlings by adults an average of 1.8 times per hour during the early nestling stage and 1.5 times per hour during the late nestling stage and did not change over time. Characterization of parental behavior by video had similarities to but also key differences from findings taken from blind observations. Results from this study will facilitate greater understanding of Palila reproductive strategies.

  1. Seizure semiology inferred from clinical descriptions and from video recordings. How accurate are they?

    DEFF Research Database (Denmark)

    Beniczky, Simona Alexandra; Fogarasi, András; Neufeld, Miri

    2012-01-01

    To assess how accurate the interpretation of seizure semiology is when inferred from witnessed seizure descriptions and from video recordings, five epileptologists analyzed 41 seizures from 30 consecutive patients who had clinical episodes in the epilepsy monitoring unit. For each clinical episode...... for the descriptions (k=0.67) and almost perfect for the video recordings (k=0.95). Video recordings significantly increase the accuracy of seizure interpretation....

  2. Markerless registration for image guided surgery. Preoperative image, intraoperative video image, and patient

    International Nuclear Information System (INIS)

    Kihara, Tomohiko; Tanaka, Yuko

    1998-01-01

    Real-time and volumetric acquisition of X-ray CT, MR, and SPECT is the latest trend in medical imaging devices. A clinical challenge is to use this multi-modality volumetric information complementarily on the patient throughout the diagnostic and surgical processes. Intraoperative image and patient integration intends to establish a common image-based reference frame across the diagnostic and surgical processes. This provides a quantitative measure during surgery, for which we have so far relied mostly on doctors' skills and experience. Intraoperative image and patient integration involves various technologies; however, we think one of the most important elements is the development of markerless registration, which should be efficient and applicable to the preoperative multi-modality data sets, the intraoperative image, and the patient. We developed a registration system which integrates preoperative multi-modality images, the intraoperative video image, and the patient. It consists of real-time registration of a video camera for intraoperative use, markerless surface-sampling matching of patient and image, our previous work on markerless multi-modality image registration of X-ray CT, MR, and SPECT, and image synthesis on the video image. We think these techniques can be used in many applications involving video-camera-like devices such as video cameras, microscopes, and image intensifiers. (author)

  3. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    Science.gov (United States)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts: a suite of applications programs and an executive which serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  4. Extended image differencing for change detection in UAV video mosaics

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e. the observations are taken at time intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to a larger scene coverage.
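
    The change-mask step described above lends itself to a compact illustration. The following Python sketch is not the authors' implementation; it only assumes a co-registered pair of grayscale mosaics as NumPy arrays and shows one plausible way to combine an intensity difference with a gradient-magnitude difference and threshold the result adaptively.

        import numpy as np
        from scipy import ndimage

        def change_mask(mosaic_a, mosaic_b, w_int=0.5, w_grad=0.5, k=2.5):
            """Toy change mask for a co-registered pair of float images in [0, 1]."""
            # Intensity difference image
            d_int = np.abs(mosaic_a - mosaic_b)

            # Gradient-magnitude difference image (Sobel approximation)
            def grad_mag(img):
                return np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
            d_grad = np.abs(grad_mag(mosaic_a) - grad_mag(mosaic_b))

            # Linear combination of the two difference images
            d = w_int * d_int + w_grad * d_grad

            # Adaptive threshold derived from the statistics of the difference image
            return d > d.mean() + k * d.std()

    In practice the weights and the threshold factor would have to be tuned per scene; the sketch ignores the mosaicking artifacts discussed in the abstract.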

  5. Mass-storage management for distributed image/video archives

    Science.gov (United States)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue has been addressed in the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with the related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management. They allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis. It manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.

  6. Mobile, portable lightweight wireless video recording solutions for homeland security, defense, and law enforcement applications

    Science.gov (United States)

    Sandy, Matt; Goldburt, Tim; Carapezza, Edward M.

    2015-05-01

    It is desirable for executive officers of law enforcement agencies and other executive officers in homeland security and defense, as well as first responders, to have some basic information about the latest trends in mobile, portable, lightweight wireless video recording solutions available on the market. This paper reviews and discusses a number of studies on the use and effectiveness of wireless video recording solutions. It provides insights into the features of wearable video recording devices that offer excellent applications for the category of security agencies listed in this paper. It also provides answers to key questions such as: how to determine the type of video recording solution most suitable for the needs of your agency, the essential features to look for when selecting a device for your video needs, and the privacy issues involved with wearable video recording devices.

  7. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    Science.gov (United States)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
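
    As an illustration of the image processing idea described above (tracking the brightness of cavitating fluid in a target area and characterizing it in the frequency domain), the following Python sketch is a minimal, hypothetical example; the region of interest, frame format, and the use of a plain FFT peak are assumptions, not the authors' actual procedure.

        import numpy as np

        def dominant_frequency(frames, frame_rate, roi):
            """Strongest oscillation frequency of the mean brightness in a target area.

            frames: sequence of 2-D grayscale arrays; roi: (row_slice, col_slice)."""
            signal = np.array([frame[roi].mean() for frame in frames])
            signal = signal - signal.mean()              # drop the DC component
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / frame_rate)
            return freqs[1:][np.argmax(spectrum[1:])]    # skip the zero-frequency bin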

  8. How to implement live video recording in the clinical environment: A practical guide for clinical services.

    Science.gov (United States)

    Lloyd, Adam; Dewar, Alistair; Edgar, Simon; Caesar, Dave; Gowens, Paul; Clegg, Gareth

    2017-06-01

    The use of video in healthcare is becoming more common, particularly in simulation and educational settings. However, video recording live episodes of clinical care is far less routine. To provide a practical guide for clinical services to embed live video recording. Using Kotter's 8-step process for leading change, we provide a 'how to' guide to navigate the challenges required to implement a continuous video-audit system based on our experience of video recording in our emergency department resuscitation rooms. The most significant hurdles in installing continuous video audit in a busy clinical area involve change management rather than equipment. Clinicians are faced with considerable ethical, legal and data protection challenges which are the primary barriers for services that pursue video recording of patient care. Existing accounts of video use rarely acknowledge the organisational and cultural dimensions that are key to the success of establishing a video system. This article outlines core implementation issues that need to be addressed if video is to become part of routine care delivery. By focussing on issues such as staff acceptability, departmental culture and organisational readiness, we provide a roadmap that can be pragmatically adapted by all clinical environments, locally and internationally, that seek to utilise video recording as an approach to improving clinical care. © 2017 John Wiley & Sons Ltd.

  9. Does Wearable Medical Technology With Video Recording Capability Add Value to On-Call Surgical Evaluations?

    Science.gov (United States)

    Gupta, Sameer; Boehme, Jacqueline; Manser, Kelly; Dewar, Jannine; Miller, Amie; Siddiqui, Gina; Schwaitzberg, Steven D

    2016-10-01

    Background Google Glass has been used in a variety of medical settings with promising results. We explored the use and potential value of an asynchronous, near-real time protocol-which avoids transmission issues associated with real-time applications-for recording, uploading, and viewing of high-definition (HD) visual media in the emergency department (ED) to facilitate remote surgical consults. Study Design First-responder physician assistants captured pertinent aspects of the physical examination and diagnostic imaging using Google Glass' HD video or high-resolution photographs. These visual media were then securely uploaded to the study website. The surgical consultation then proceeded over the phone in the usual fashion and a clinical decision was made. The surgeon then accessed the study website to review the uploaded video. This was followed by a questionnaire regarding how the additional data impacted the consultation. Results The management plan changed in 24% (11) of cases after surgeons viewed the video. Five of these plans involved decision making regarding operative intervention. Although surgeons were generally confident in their initial management plan, confidence scores increased further in 44% (20) of cases. In addition, we surveyed 276 ED patients on their opinions regarding the practice of health care providers wearing and using recording devices in the ED. The survey results revealed that the majority of patients are amenable to the addition of wearable technology with video functionality to their care. Conclusions This study demonstrates the potential value of a medically dedicated, hands-free, HD recording device with internet connectivity in facilitating remote surgical consultation. © The Author(s) 2016.

  10. Video event data recording of a taxi driver used for diagnosis of epilepsy

    Directory of Open Access Journals (Sweden)

    Kotaro Sakurai

    2014-01-01

    Full Text Available A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by the VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he had raced the engine for a long time while parked. The VEDR enabled confirmation that the accidents were caused by epileptic seizures, and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only as traffic accident evidence; it might also contribute to a driver's health care and road safety.

  11. Localizing wushu players on a platform based on a video recording

    Science.gov (United States)

    Peczek, Piotr M.; Zabołotny, Wojciech M.

    2017-08-01

    This article describes the development of a method to localize an athlete on a platform during a sports performance, based on a static video recording. The sport considered for this method is wushu, a martial art; however, any other discipline could be used. The requirements are specified, and two image processing algorithms are described. The next part presents an experiment based on recordings from the Pan American Wushu Championship; the steps of the algorithm are illustrated on those recordings. Results are evaluated manually. The last part of the article concludes whether the algorithm is applicable and what improvements have to be implemented to use it during sports competitions as well as for offline analysis.

  12. Introducing video recording in primary care midwifery for research purposes: procedure, dataset, and use.

    NARCIS (Netherlands)

    Spelten, E.R.; Martin, L.; Gitsels, J.T.; Pereboom, M.T.R.; Hutton, E.K.; Dulmen, S. van

    2015-01-01

    Background: video recording studies have been found to be complex; however, very few studies describe the actual introduction and enrolment of the study, the resulting dataset, and its interpretation. In this paper we describe the introduction and the use of video recordings of health care provider

  13. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed, including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat.
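
    Several of the listed video operations have straightforward digital counterparts. The NumPy sketch below is a hypothetical illustration of contrast stretching, band ratioing, and false-color compositing; it is not the Wyoming video system itself, which performed these steps in analog hardware.

        import numpy as np

        def contrast_stretch(band, low_pct=2, high_pct=98):
            """Linear contrast stretch between two percentiles, output in [0, 1]."""
            lo, hi = np.percentile(band, [low_pct, high_pct])
            return np.clip((band - lo) / (hi - lo + 1e-9), 0.0, 1.0)

        def band_ratio(band_a, band_b):
            """Pixel-wise ratio of two spectral bands."""
            return band_a / (band_b + 1e-9)

        def false_color_composite(band_r, band_g, band_b):
            """Stack three stretched bands into an RGB false-color composite."""
            return np.dstack([contrast_stretch(b) for b in (band_r, band_g, band_b)])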

  14. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time...... Bødker’s in 1996, three possible areas of expansion to Susanne Bødker’s method for analyzing video data were found. Firstly, a technological expansion due to contemporary developments in sophisticated analysis software, since the mid 1990’s. Secondly, a conceptual expansion, where the applicability...... of using Activity Theory outside of the context of human–computer interaction, is assessed. Lastly, a temporal expansion, by facilitating an organized method for tracking the development of activities over time, within the coding and analysis of video data. To expand on the above areas, a prototype coding...

  15. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  16. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024×1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select

  17. Video-rate optical flow corrected intraoperative functional fluorescence imaging

    NARCIS (Netherlands)

    Koch, Maximilian; Glatz, Juergen; Ermolayev, Vladimir; de Vries, Elisabeth G. E.; van Dam, Gooitzen M.; Englmeier, Karl-Hans; Ntziachristos, Vasilis

    Intraoperative fluorescence molecular imaging based on targeted fluorescence agents is an emerging approach to improve surgical and endoscopic imaging and guidance. Short exposure times per frame and implementation at video rates are necessary to provide continuous feedback to the physician and

  18. The influence of video recordings on beginning therapist’s learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    2010-01-01

    the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pro and cons of the use of video recordings this paper presents......Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions have gained ground in training and supervision. While some praise the advantages others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite...

  19. The influence of video recordings on beginning therapists’ learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pro and cons of the use of video recordings this paper presents......Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions have gained ground in training and supervision. While some praise the advantages others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite...

  20. The art of assessing quality for images and video

    International Nuclear Information System (INIS)

    Deriche, M.

    2011-01-01

    The early years of this century have witnessed a tremendous growth in the use of digital multimedia data for different communication applications. Researchers from around the world are spending substantial research efforts in developing techniques for improving the appearance of images/video. However, as we know, preserving high quality is a challenging task. Images are subject to distortions during acquisition, compression, transmission, analysis, and reconstruction. For this reason, the research area focusing on image and video quality assessment has attracted a lot of attention in recent years. In particular, compression applications and other multimedia applications need powerful techniques for evaluating quality objectively without human interference. This tutorial will cover the different faces of image quality assessment. We will motivate the need for robust image quality assessment techniques, then discuss the main algorithms found in the literature with a critical perspective. We will present the different metrics used for full reference, reduced reference and no reference applications. We will then discuss the difference between image and video quality assessment. In all of the above, we will take a critical approach to explain which metric can be used for which application. Finally we will discuss the different approaches to analyze the performance of image/video quality metrics, and end the tutorial with some perspectives on newly introduced metrics and their potential applications.
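
    As a concrete example of the full-reference category mentioned above, the following Python snippet computes the peak signal-to-noise ratio (PSNR), one of the simplest full-reference metrics; it is included here only for illustration and is not part of the tutorial itself.

        import numpy as np

        def psnr(reference, distorted, max_value=255.0):
            """Peak signal-to-noise ratio in dB, a basic full-reference quality metric."""
            err = reference.astype(np.float64) - distorted.astype(np.float64)
            mse = np.mean(err ** 2)
            if mse == 0.0:
                return float("inf")                      # identical images
            return 10.0 * np.log10(max_value ** 2 / mse)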

  1. Live lecture versus video-recorded lecture: are students voting with their feet?

    Science.gov (United States)

    Cardall, Scott; Krupat, Edward; Ulrich, Michael

    2008-12-01

    In light of educators' concerns that lecture attendance in medical school has declined, the authors sought to assess students' perceptions, evaluations, and motivations concerning live lectures compared with accelerated, video-recorded lectures viewed online. The authors performed a cross-sectional survey study of all first- and second-year students at Harvard Medical School. Respondents answered questions regarding their lecture attendance; use of class and personal time; use of accelerated, video-recorded lectures; and reasons for viewing video-recorded and live lectures. Other questions asked students to compare how well live and video-recorded lectures satisfied learning goals. Of the 353 students who received questionnaires, 204 (58%) returned responses. Collectively, students indicated watching 57.2% of lectures live, 29.4% recorded, and 3.8% using both methods. All students have watched recorded lectures, and most (88.5%) have used video-accelerating technologies. When using accelerated, video-recorded lecture as opposed to attending lecture, students felt they were more likely to increase their speed of knowledge acquisition (79.3% of students), look up additional information (67.7%), stay focused (64.8%), and learn more (63.7%). Live attendance remains the predominant method for viewing lectures. However, students find accelerated, video-recorded lectures equally or more valuable. Although educators may be uncomfortable with the fundamental change in the learning process represented by video-recorded lecture use, students' responses indicate that their decisions to attend lectures or view recorded lectures are motivated primarily by a desire to satisfy their professional goals. A challenge remains for educators to incorporate technologies students find useful while creating an interactive learning culture.

  2. In Pursuit of Reciprocity: Researchers, Teachers, and School Reformers Engaged in Collaborative Analysis of Video Records

    Science.gov (United States)

    Curry, Marnie W.

    2012-01-01

    In the ideal, reciprocity in qualitative inquiry occurs when there is give-and-take between researchers and the researched; however, the demands of the academy and resource constraints often make the pursuit of reciprocity difficult. Drawing on two video-based, qualitative studies in which researchers utilized video records as resources to enhance…

  3. 75 FR 63434 - Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording...

    Science.gov (United States)

    2010-10-15

    ...] Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording Equipment in... the availability of a compliance guide on the use of video or other electronic monitoring or recording... providing this draft guide to advise establishments that video or other electronic monitoring or recording...

  4. An experimental digital consumer recorder for MPEG-coded video signals

    NARCIS (Netherlands)

    Saeijs, R.W.J.J.; With, de P.H.N.; Rijckaert, A.M.A.; Wong, C.

    1995-01-01

    The concept and real-time implementation of an experimental home-use digital recorder is presented, capable of recording MPEG-compressed video signals. The system has small recording mechanics based on the DVC standard and it uses MPEG compression for trick-mode signals as well

  5. Image ranking in video sequences using pairwise image comparisons and temporal smoothing

    CSIR Research Space (South Africa)

    Burke, Michael

    2016-12-01

    Full Text Available The ability to predict the importance of an image is highly desirable in computer vision. This work introduces an image ranking scheme suitable for use in video or image sequences. Pairwise image comparisons are used to determine image ‘interest...

  6. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  7. Image processor for high resolution video

    International Nuclear Information System (INIS)

    Pessoa, P.P.; Assis, J.T.; Cardoso, S.B.; Lopes, R.T.

    1989-01-01

    In this paper, we discuss an image presentation and processing system developed in the Turbo Pascal 5.0 language. Our system allows the visualization and processing of images in 16 different colors, chosen at a time from a set of 64 possible ones. Digital filters of the mean, median, Laplacian, gradient, and histogram equalization type have been implemented, so as to allow better image quality. Possible applications of our system are also discussed, e.g., satellites, computerized tomography, medicine, and microscopes. (author) [pt

  8. American video peak store gives fuel a better image

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    A new American image enhancement system using a video peak frame store aims to overcome the common problems of viewing serial numbers on irradiated fuel assemblies within the reactor core whilst reducing operator exposure at the same time. Other nuclear plant inspection applications are envisaged. (author)

  9. The advantages of using photographs and video images in ...

    African Journals Online (AJOL)

    Background: The purpose of this study was to evaluate the advantages of a telephone consultation with a specialist in paediatric surgery after taking photographs and video images by a general practitioner for the diagnosis of some diseases. Materials and Methods: This was a prospective study of the reliability of paediatric ...

  10. Can social tagged images aid concept-based video search?

    NARCIS (Netherlands)

    Setz, A.T.; Snoek, C.G.M.

    2009-01-01

    This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We

  11. Applying Image Matching to Video Analysis

    Science.gov (United States)

    2010-09-01

    image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, the... Image counts per scene group: Kitchen 136, Telephone 3, Bookshelf 81, Title Screen 10, Map 1 24, Map 2 16. ...command line. This implementation of a Bloom filter uses two arbitrary... with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face

  12. Record Desktop Activity as Streaming Videos for Asynchronous, Video-Based Collaborative Learning.

    Science.gov (United States)

    Chang, Chih-Kai

    As Web-based courses using videos have become popular in recent years, the issue of managing audiovisual aids has become noteworthy. The contents of audiovisual aids may include a lecture, an interview, a featurette, an experiment, etc. The audiovisual aids of Web-based courses are transformed into the streaming format that can make the quality of…

  13. High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells

    Science.gov (United States)

    Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey

    2018-05-01

    A video capillaroscopy system with a high image recording rate is considered, able to resolve red blood cells moving within a capillary at velocities up to 5 mm/s. The proposed procedures for processing the recorded video sequence allow evaluating the spatial capillary area, capillary diameter, and central line with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift between neighboring images in the blood flow area with moving red blood cells and to measure the blood flow velocity directly along the capillary central line. The developed method opens new opportunities for biomedical diagnostics, particularly through long-term continuous monitoring of red blood cell velocity within a capillary. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood flow velocity in a separate capillary as well as in a capillary net are presented and discussed.
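
    The inter-frame shift measurement described above can be illustrated with a short Python sketch. It estimates the translation between two consecutive frames by FFT-based cross-correlation and converts the shift to a velocity; the function names, the correlation method, and the calibration parameters are illustrative assumptions rather than the authors' exact procedure.

        import numpy as np

        def interframe_shift(frame_a, frame_b):
            """Translation (rows, cols) between two frames via FFT cross-correlation."""
            a = frame_a - frame_a.mean()
            b = frame_b - frame_b.mean()
            corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Convert wrap-around peak positions into signed shifts
            return np.array([p - s if p > s // 2 else p for p, s in zip(peak, a.shape)],
                            dtype=float)

        def flow_speed(frame_a, frame_b, frame_rate, um_per_pixel):
            """Blood flow speed (micrometres per second) from the inter-frame shift."""
            dy, dx = interframe_shift(frame_a, frame_b)
            return np.hypot(dy, dx) * um_per_pixel * frame_rate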

  14. Checking Interceptions and Audio Video Recordings by the Court after Referral

    Directory of Open Access Journals (Sweden)

    Sandra Grădinaru

    2012-05-01

    Full Text Available In any event, the prosecutor and the judiciary should pay particular attention to the risk of their falsification, which can be achieved by taking only parts of conversations or communications that took place in the past and are declared to have been recorded recently, or by removing parts of conversations or communications, or even by the translation or removal of images. This is why the legislature provided an express provision for their verification. The provisions of art. 916 Paragraph 1 of the Criminal Procedure Code offer the possibility of a technical expertise regarding the originality and continuity of the records, at the request of the prosecutor or the parties, or ex officio, where there are doubts about the correctness of the registration in whole or in part, especially if not supported by all the evidence. Therefore, audio or video recordings serve as evidence in criminal proceedings if not challenged, or if confirmed by technical expertise where there were doubts about their conformity with reality. In the event that an expertise on the authenticity of the records is lacking, they will not be accepted as evidence in solving a criminal case, thus eliminating any probative value of the intercepted conversations and communications in that case, by applying article 64 Par. 2 of the Criminal Procedure Code.

  15. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    Science.gov (United States)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
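
    For readers unfamiliar with tone mapping (TM) of logarithmic-response data, the following Python sketch shows a minimal global operator (a percentile stretch followed by display gamma). It is a simplified stand-in for illustration only, not the TM method implemented in the framework described above.

        import numpy as np

        def tonemap_log_sensor(raw, out_bits=8):
            """Map raw logarithmic-response sensor data to a displayable LDR image."""
            raw = raw.astype(np.float64)
            lo, hi = np.percentile(raw, [1, 99])         # robust black and white points
            norm = np.clip((raw - lo) / (hi - lo + 1e-9), 0.0, 1.0)
            mapped = norm ** (1.0 / 2.2)                 # standard display gamma
            return np.round(mapped * (2 ** out_bits - 1)).astype(np.uint16)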

  16. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming nowadays being state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs, and a slower growth in the pixel count of thermal imagers (TIs) in comparison to consumer cameras. With megapixel detectors the CCIR output format is not sufficient any longer. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  17. Video on the Internet: An introduction to the digital encoding, compression, and transmission of moving image data.

    Science.gov (United States)

    Boudier, T; Shotton, D M

    1999-01-01

    In this paper, we seek to provide an introduction to the fast-moving field of digital video on the Internet, from the viewpoint of the biological microscopist who might wish to store or access videos, for instance in image databases such as the BioImage Database (http://www.bioimage.org). We describe and evaluate the principal methods used for encoding and compressing moving image data for digital storage and transmission over the Internet, which involve compromises between compression efficiency and retention of image fidelity, and describe the existing alternate software technologies for downloading or streaming compressed digitized videos using a Web browser. We report the results of experiments on video microscopy recordings and three-dimensional confocal animations of biological specimens to evaluate the compression efficiencies of the principal video compression-decompression algorithms (codecs) and to document the artefacts associated with each of them. Because MPEG-1 gives very high compression while yet retaining reasonable image quality, these studies lead us to recommend that video databases should store both a high-resolution original version of each video, ideally either uncompressed or losslessly compressed, and a separate edited and highly compressed MPEG-1 preview version that can be rapidly downloaded for interactive viewing by the database user. Copyright 1999 Academic Press.

  18. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Seymour Rowan

    2008-01-01

    Full Text Available Abstract We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  19. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  20. Video retrieval by still-image analysis with ImageMiner

    Science.gov (United States)

    Kreyss, Jutta; Roeper, M.; Alshuth, Peter; Hermes, Thorsten; Herzog, Otthein

    1997-01-01

    The large amount of available multimedia information (e.g. videos, audio, images) requires efficient and effective annotation and retrieval methods. As videos start to play a more important role in multimedia, we want to make them available for content-based retrieval. The ImageMiner system, developed by the AI group at the University of Bremen, is designed for content-based retrieval of single images through a new combination of techniques and methods from computer vision and artificial intelligence. In our approach to making videos available for retrieval in a large database of videos and images, two steps are necessary: first, the detection and extraction of shots from a video, which is done by a histogram-based method, and second, the combination of the separate frames of a shot into one single still image. The latter is performed by a mosaicing technique. The resulting mosaiced image gives a one-image visualization of the shot and can be analyzed by the ImageMiner system. ImageMiner has been tested on several domains (e.g. landscape images, technical drawings), which cover a wide range of applications.
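
    The histogram-based shot detection step mentioned above can be sketched in a few lines of Python. The snippet below is a generic illustration, not the ImageMiner implementation; the bin count and threshold are arbitrary assumptions.

        import numpy as np

        def shot_boundaries(frames, bins=64, threshold=0.4):
            """Indices of frames where a histogram jump suggests a new shot starts."""
            boundaries, prev_hist = [], None
            for i, frame in enumerate(frames):
                hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
                hist = hist / hist.sum()                 # normalize to a distribution
                if prev_hist is not None:
                    # Half the L1 distance between distributions lies in [0, 1]
                    if 0.5 * np.abs(hist - prev_hist).sum() > threshold:
                        boundaries.append(i)
                prev_hist = hist
            return boundaries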

  1. MAVIS: Mobile Acquisition and VISualization - a professional tool for video recording on a mobile platform

    OpenAIRE

    Watten, Phil; Gilardi, Marco; Holroyd, Patrick; Newbury, Paul

    2015-01-01

    Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera and are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical ...

  2. Self-Reflection of Video-Recorded High-Fidelity Simulations and Development of Clinical Judgment.

    Science.gov (United States)

    Bussard, Michelle E

    2016-09-01

    Nurse educators are increasingly using high-fidelity simulators to improve prelicensure nursing students' ability to develop clinical judgment. Traditionally, oral debriefing sessions have immediately followed the simulation scenarios as a method for students to connect theory to practice and therefore develop clinical judgment. Recently, video recording of the simulation scenarios is being incorporated. This qualitative, interpretive description study was conducted to identify whether self-reflection on video-recorded high-fidelity simulation (HFS) scenarios helped prelicensure nursing students to develop clinical judgment. Tanner's clinical judgment model was the framework for this study. Four themes emerged from this study: Confidence, Communication, Decision Making, and Change in Clinical Practice. This study indicated that self-reflection of video-recorded HFS scenarios is beneficial for prelicensure nursing students to develop clinical judgment. [J Nurs Educ. 2016;55(9):522-527.]. Copyright 2016, SLACK Incorporated.

  3. Linear array of photodiodes to track a human speaker for video recording

    International Nuclear Information System (INIS)

    DeTone, D; Neal, H; Lougheed, R

    2012-01-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant– the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4kHz vs. 60Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting–a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.

  4. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant- the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4kHz vs. 60Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting-a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
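
    The noise-filtering benefit of flashing the necklace at 70 Hz and reading the array at 4 kHz can be sketched as a simple lock-in style detection. The Python snippet below is a hypothetical illustration of that idea, not the authors' actual processing; array shapes and parameter values are assumptions.

        import numpy as np

        def locate_flashing_led(samples, sample_rate=4000.0, flash_rate=70.0):
            """Pixel index of a flashing LED on a linear photodiode array.

            samples: array of shape (n_readouts, n_pixels), one row per array readout.
            Ambient light is nearly constant over the window, so projecting each
            pixel's time series onto the flash frequency rejects it."""
            n = samples.shape[0]
            t = np.arange(n) / sample_rate
            ref_cos = np.cos(2 * np.pi * flash_rate * t)   # quadrature references
            ref_sin = np.sin(2 * np.pi * flash_rate * t)
            ac = samples - samples.mean(axis=0)            # remove the DC (ambient) level
            amplitude = np.hypot(ref_cos @ ac, ref_sin @ ac)
            return int(np.argmax(amplitude))

    The returned pixel index could then be translated into a panning command for the PTZ camera.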

  5. Turbulent structure of concentration plumes through application of video imaging

    Energy Technology Data Exchange (ETDEWEB)

    Dabberdt, W.F.; Martin, C. [National Center for Atmospheric Research, Boulder, CO (United States); Hoydysh, W.G.; Holynskyj, O. [Environmental Science & Services Corp., Long Island City, NY (United States)

    1994-12-31

    Turbulent flows and dispersion in the presence of building wakes and terrain-induced local circulations are particularly difficult to simulate with numerical models or measure with conventional fluid modeling and ambient measurement techniques. The problem stems from the complexity of the kinematics and the difficulty in making representative concentration measurements. New laboratory video imaging techniques are able to overcome many of these limitations and are being applied to study a range of difficult problems. Here the authors apply "tomographic" video imaging techniques to the study of the turbulent structure of an ideal elevated plume and the relationship of short-period peak concentrations to long-period average values. A companion paper extends application of the technique to characterization of turbulent plume-concentration fields in the wake of a complex building configuration.

  6. Registration and recognition in images and videos

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2014-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art  research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems.  The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year.This edited volume contains a selection of articles covering some of the talks and tutorials held during the last editions of the school. The chapters provide an in-depth overview o...

  7. A kind of video image digitizing circuit based on computer parallel port

    International Nuclear Information System (INIS)

    Wang Yi; Tang Le; Cheng Jianping; Li Yuanjing; Zhang Binquan

    2003-01-01

    A video image digitizing circuit based on the computer parallel port was developed to digitize the flash X-ray images in our Multi-Channel Digital Flash X-ray Imaging System. The circuit digitizes the video images and stores them in static memory. The digital images can be transferred to a computer through the parallel port and can be displayed, processed, and stored. (authors)

  8. Videorec as gameplay: Recording playthroughs and video game engagement

    Directory of Open Access Journals (Sweden)

    Gabriel Menotti

    2014-03-01

    Full Text Available This paper outlines an alternative genealogy of “non-narrative machinima” by means of tracing a parallel with different cinematographic genres. It analyses the circuit of production and distribution of such material as a field for modes of superplay, in which users both compete and collaborate. In doing so, it proposes that the recording of playthroughs, a practice seemingly secondary to videogame consumption, might constitute an essential part of its culture and development, creating meaningful interfaces between players and industries.

  9. Let's Make a Movie: Investigating Pre-Service Teachers' Reflections on Using Video Recorded Role Playing Cases in Turkey

    Science.gov (United States)

    Koc, Mustafa

    2011-01-01

    This study examined the potential consequences of using student-filmed video cases in the study of classroom management in teacher education. Pre-service teachers in groups were engaged in video-recorded role playing to simulate classroom memoirs. Each group shared their video cases and interpretations in a class presentation. Qualitative data…

  10. Surgeon-Manipulated Live Surgery Video Recording Apparatuses: Personal Experience and Review of Literature.

    Science.gov (United States)

    Kapi, Emin

    2017-06-01

    Visual recording of surgical procedures is a method that is used quite frequently in plastic surgery practice. While presentations containing photographs are quite common in education seminars and congresses, video-containing presentations find more favour. For this reason, the presentation of surgical procedures in the form of real-time video display has increased especially recently. Appropriate technical equipment for video recording is not available in most hospitals, so there is a need to set up external apparatus in the operating room. Among these apparatuses can be listed such options as head-mounted video cameras, chest-mounted cameras, and tripod-mountable cameras. The head-mounted video camera is an apparatus that is capable of capturing high-resolution and detailed close-up footage. The tripod-mountable camera enables video capturing from a fixed point. Certain user-specific modifications can be made to overcome some of these restrictions. Among these modifications, custom-made applications are one of the most effective solutions. The article attempts to present the features and experiences concerning the use of a combination of a head- or chest-mounted action camera, a custom-made portable tripod apparatus with versatile features, and an underwater camera. The apparatuses described are quite easy to assemble, quickly installed, and inexpensive, do not require specific technical knowledge, and can be manipulated by the surgeon personally in all procedures. The author believes that video recording apparatuses will be integrated further into the operating room, become a standard practice, and become more enabling for self-manipulation by the surgeon in the near future. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  11. Analysis of physiological responses associated with emotional changes induced by viewing video images of dental treatments.

    Science.gov (United States)

    Sekiya, Taki; Miwa, Zenzo; Tsuchihashi, Natsumi; Uehara, Naoko; Sugimoto, Kumiko

    2015-03-30

    Since understanding the emotional changes induced by dental treatments is important for dentists to provide safe and comfortable dental treatment, we analyzed physiological responses during the viewing of video images of dental treatments to search for appropriate objective indices reflecting emotional changes. Fifteen healthy young adult subjects voluntarily participated in the present study. Electrocardiogram (ECG), electroencephalogram (EEG) and corrugator muscle electromyogram (EMG) were recorded, and changes in these measures while viewing videos of dental treatments were analyzed. The subjective discomfort level was acquired by the Visual Analog Scale method. Analyses of autonomic nervous activities from ECG and four emotional factors (anger/stress, joy/satisfaction, sadness/depression and relaxation) from EEG demonstrated that increases in sympathetic nervous activity, reflecting increased stress, and decreases in relaxation level were induced by the videos of infiltration anesthesia and cavity excavation, but not intraoral examination. The corrugator muscle activity was increased by all three images regardless of video content. The subjective discomfort during watching infiltration anesthesia and cavity excavation was higher than during intraoral examination, showing that sympathetic activities and the relaxation factor of emotion changed in a manner consistent with subjective emotional changes. These results suggest that measurement of autonomic nervous activities estimated from ECG and emotional factors analyzed from EEG is useful for objective evaluation of subjective emotion.

  12. Computer simulation of orthognathic surgery with video imaging

    Science.gov (United States)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

    Patients with extreme jaw imbalance must often undergo operative corrections. The goal of therapy is to harmonize the stomatognathic system and achieve an aesthetic correction of the facial profile. A new procedure will be presented which supports the maxillo-facial surgeon in planning the operation and which also presents the patient with the expected result of the treatment by means of video images. Once an x-ray has been digitized it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters, and a new soft tissue profile can be calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation, the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987 and 1991, 76 out of 121 patients could be evaluated. The deviation in profile change varied between 0 and 1.6 mm. A side effect of the practical applications was an increase in patient compliance.

  13. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with the inter-shot analysis. - Abstract: The last few years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved to be able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
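    A minimal sketch of the feature-based registration step, under the assumption that OpenCV is available. The actual method combines SIFT descriptors with coherent point drift (CPD) and a dedicated outlier-rejection procedure; since CPD is not part of OpenCV, this illustration substitutes a RANSAC-estimated similarity transform for the point-set registration stage.

        import cv2
        import numpy as np

        def register_frame(reference, moving):
            """Warp `moving` (grayscale uint8) onto `reference` using SIFT matches."""
            sift = cv2.SIFT_create()
            kp_ref, des_ref = sift.detectAndCompute(reference, None)
            kp_mov, des_mov = sift.detectAndCompute(moving, None)

            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(des_mov, des_ref, k=2)
            # Lowe ratio test as a first, crude outlier filter.
            good = [m[0] for m in matches
                    if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
            if len(good) < 4:
                return moving  # not enough matches; return the frame unchanged

            src = np.float32([kp_mov[m.queryIdx].pt for m in good])
            dst = np.float32([kp_ref[m.trainIdx].pt for m in good])
            # Rotation + translation + scale, with RANSAC rejecting remaining outliers.
            M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                                     ransacReprojThreshold=3.0)
            h, w = reference.shape
            return cv2.warpAffine(moving, M, (w, h))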

  14. The client’s ideas and fantasies of the supervisor in video recorded psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    2010-01-01

    Aim: Despite the current relatively widespread use of video as a supervisory tool, there are few empirical studies on how recordings influence the relationship between client and supervisor. This paper presents a qualitative, explorative study of clients’ experience of having their psychotherapy...

  15. The impact of online video lecture recordings and automated feedback on student performance

    NARCIS (Netherlands)

    Wieling, M. B.; Hofman, W. H. A.

    To what extent a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures and the offering of online quizzes with appropriate feedback has an additional positive impact on the performance of these students compared to the traditional

  16. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT

    Directory of Open Access Journals (Sweden)

    Mohamed M. Ibrahim

    2014-01-01

    Full Text Available Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  17. Video multiple watermarking technique based on image interlacing using DWT.

    Science.gov (United States)

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
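    The following is a hedged, simplified sketch of the embedding pipeline named in the two records above (Arnold scrambling plus DWT-domain embedding), assuming the PyWavelets library is available. The image-interlacing step and the exact embedding rule of the papers are not reproduced; the additive rule, the embedding strength and the use of the level-3 approximation band are illustrative assumptions.

        import numpy as np
        import pywt

        def arnold_scramble(w, iterations=5):
            """Arnold cat-map scrambling of a square watermark (simple encryption step)."""
            n = w.shape[0]          # watermark is assumed square
            out = w.copy()
            for _ in range(iterations):
                scrambled = np.empty_like(out)
                for x in range(n):
                    for y in range(n):
                        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
                out = scrambled
            return out

        def embed_watermark(host, watermark, alpha=0.05):
            """Embed a scrambled watermark into the level-3 approximation band of `host`."""
            coeffs = pywt.wavedec2(host.astype(float), 'haar', level=3)
            cA3 = coeffs[0]
            wm = arnold_scramble(watermark.astype(float))
            # Add the watermark into the top-left region of the approximation band.
            h = min(wm.shape[0], cA3.shape[0])
            w = min(wm.shape[1], cA3.shape[1])
            cA3_marked = cA3.copy()
            cA3_marked[:h, :w] += alpha * np.max(np.abs(cA3)) * wm[:h, :w]
            coeffs = [cA3_marked] + list(coeffs[1:])
            return pywt.waverec2(coeffs, 'haar')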

  18. Developing an Interface to Order and Document Health Education Videos in the Electronic Health Record.

    Science.gov (United States)

    Wojcik, Lauren

    2015-01-01

    Transitioning to electronic health records (EHRs) provides an opportunity for health care systems to integrate educational content available on interactive patient systems (IPS) with the medical documentation system. This column discusses how one hospital simplified providers' workflow by making it easier to order educational videos and ensure that completed education is documented within the medical record. Integrating the EHR and IPS streamlined the provision of patient education, improved documentation, and supported the organization in meeting core requirements for Meaningful Use.

  19. Large-Scale Query-by-Image Video Retrieval Using Bloom Filters

    OpenAIRE

    Araujo, Andre; Chaves, Jason; Lakshman, Haricharan; Angst, Roland; Girod, Bernd

    2016-01-01

    We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to ...
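    A minimal sketch of the core Bloom-filter idea described above: the descriptors of all frames in a video segment are inserted into one compact bit array, so that an image query is tested against segments rather than against every frame. The bit-array size, hash count and "visual word" tokens are illustrative assumptions, not the authors' configuration.

        import hashlib

        class BloomFilter:
            def __init__(self, num_bits=1 << 20, num_hashes=5):
                self.num_bits = num_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(num_bits // 8)

            def _positions(self, item):
                # Derive several bit positions from salted SHA-256 digests.
                for i in range(self.num_hashes):
                    digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                    yield int(digest, 16) % self.num_bits

            def add(self, item):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def __contains__(self, item):
                return all(self.bits[pos // 8] & (1 << (pos % 8))
                           for pos in self._positions(item))

        # Index one video segment: insert quantized descriptors from all of its frames.
        segment_index = BloomFilter()
        for visual_word in [503, 17, 1999, 503]:   # hypothetical quantized descriptors
            segment_index.add(visual_word)
        # Query time: a visual word from the query image flags the segment as a candidate.
        print(503 in segment_index)                # True (Bloom filters have no false negatives)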

  20. Video Recording With a GoPro in Hand and Upper Extremity Surgery.

    Science.gov (United States)

    Vara, Alexander D; Wu, John; Shin, Alexander Y; Sobol, Gregory; Wiater, Brett

    2016-10-01

    Video recordings of surgical procedures are an excellent tool for presentations, analyzing self-performance, illustrating publications, and educating surgeons and patients. Recording the surgeon's perspective with high-resolution video in the operating room or clinic has become readily available, and advances in software improve the ease of editing these videos. A GoPro HERO 4 Silver or Black was mounted on a head strap and worn over the surgical scrub cap, above the loupes of the operating surgeon. Five live surgical cases were recorded with the camera. The videos were uploaded to a computer and subsequently edited with iMovie or the GoPro software. The optimal settings for both the Silver and Black editions, when operating room lights are used, were determined to be a narrow view, 1080p, 60 frames per second (fps), spot meter on, protune on with auto white balance, exposure compensation at -0.5, and without a polarizing lens. When the operating room lights were not used, it was determined that the standard settings for a GoPro camera were ideal for positioning and editing (4K, 15 frames per second, spot meter and protune off). The GoPro HERO 4 provides high-quality, cost-effective video recording of upper extremity surgical procedures from the surgeon's perspective. Challenges include finding the optimal settings for each surgical procedure and the length of recording due to battery life limitations. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  1. Evaluating Student Self-Assessment through Video-Recorded Patient Simulations.

    Science.gov (United States)

    Sanderson, Tammy R; Kearney, Rachel C; Kissell, Denise; Salisbury, Jessica

    2016-08-01

    The purpose of this pilot study was to determine if the use of a video-recorded clinical session affects the accuracy of dental hygiene student self-assessment and dental hygiene instructor feedback. A repeated measures experiment was conducted. The use of the ODU 11/12 explorer was taught to students and participating faculty through video and demonstration. Students then demonstrated activation of the explorer on a student partner using the same technique. While faculty completed the student assessment in real time, the sessions were video recorded. After completing the activation of the explorer, students and faculty completed an assessment of the student's performance using a rubric. A week later, both students and faculty viewed the video of the clinical skill performance and reassessed the student's performance using the same rubric. The student videos were randomly assigned a number, so faculty reassessed the performance without access to the student's identity or the score that was initially given. Twenty-eight students and 4 pre-clinical faculty completed the study. Students' average score was 4.68±1.16 on the first assessment and slightly higher (4.89±1.45) when reviewed by video. Faculty average scores were 5.07±2.13 at the first assessment and 4.79±2.54 on the second assessment with the video. While no significant differences were found in the overall scores, there was a significant difference in the scores on the grading criteria compared with the expert assessment scores (p=0.0001). This pilot study shows that calibration and assessment without bias in education is a challenge. Analyzing and incorporating new techniques can result in more exact assessment of student performance and self-assessment. Copyright © 2016 The American Dental Hygienists’ Association.

  2. Simultaneous recording of EEG and electromyographic polygraphy increases the diagnostic yield of video-EEG monitoring.

    Science.gov (United States)

    Hill, Aron T; Briggs, Belinda A; Seneviratne, Udaya

    2014-06-01

    To investigate the usefulness of adjunctive electromyographic (EMG) polygraphy in the diagnosis of clinical events captured during long-term video-EEG monitoring. A total of 40 patients (21 women, 19 men) aged between 19 and 72 years (mean 43) investigated using video-EEG monitoring were studied. Electromyographic activity was simultaneously recorded with EEG in four patients selected on clinical grounds. In these patients, surface EMG electrodes were placed over muscles suspected to be activated during a typical clinical event. Of the 40 patients investigated, 24 (60%) were given a diagnosis, whereas 16 (40%) remained undiagnosed. All four patients receiving adjunctive EMG polygraphy obtained a diagnosis, with three of these diagnoses being exclusively reliant on the EMG recordings. Specifically, one patient was diagnosed with propriospinal myoclonus, another patient was diagnosed with facio-mandibular myoclonus, and a third patient was found to have bruxism and periodic leg movements of sleep. The information obtained from surface EMG recordings aided the diagnosis of clinical events captured during video-EEG monitoring in 7.5% of the total cohort. This study suggests that EEG-EMG polygraphy may be used as a technique of improving the diagnostic yield of video-EEG monitoring in selected cases.

  3. The distinguishing motor features of cataplexy: a study from video-recorded attacks.

    Science.gov (United States)

    Pizza, Fabio; Antelmi, Elena; Vandi, Stefano; Meletti, Stefano; Erro, Roberto; Baumann, Christian R; Bhatia, Kailash P; Dauvilliers, Yves; Edwards, Mark J; Iranzo, Alex; Overeem, Sebastiaan; Tinazzi, Michele; Liguori, Rocco; Plazzi, Giuseppe

    2018-05-01

    To describe the motor pattern of cataplexy and to determine its phenomenological differences from pseudocataplexy in the differential diagnosis of episodic falls. We selected 30 video-recorded cataplexy and 21 pseudocataplexy attacks in 17 and 10 patients evaluated for suspected narcolepsy and with final diagnoses of narcolepsy type 1 and conversion disorder, respectively, together with self-reported attack features, and asked expert neurologists to blindly evaluate the motor features of the attacks. Video-documented and self-reported attack features of cataplexy and pseudocataplexy were contrasted. Video-recorded cataplexy can be positively differentiated from pseudocataplexy by the occurrence of facial hypotonia (ptosis, mouth opening, tongue protrusion) intermingled with jerks and grimaces abruptly interrupting laughter behavior (i.e. smile, facial expression) and postural control (head drops, trunk fall) under a clear emotional trigger. Facial involvement is present in both partial and generalized cataplexy. Conversely, generalized pseudocataplexy is associated with persistence of deep tendon reflexes during the attack. Self-reported features confirmed the important role of positive emotions (laughter, telling a joke) in triggering the attacks, as well as the more frequent occurrence of partial body involvement in cataplexy compared with pseudocataplexy. Cataplexy is characterized by abrupt facial involvement during laughter behavior. Video recording of suspected cataplexy attacks allows the identification of positive clinical signs useful for diagnosis and, possibly in the future, for severity assessment.

  4. Videos and images from 25 years of teaching compressible flow

    Science.gov (United States)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  5. Evaluation of video-printer images as secondary CT images for clinical use

    International Nuclear Information System (INIS)

    Doi, K.; Rubin, J.

    1983-01-01

    Video-printer (VP) images of 24 abnormal views from a body CT scanner were made. Although the physical quality of printer images was poor, a group of radiologists and clinicians found that VP images are adequate to confirm the lesion described in the radiology report. The VP images can be used as secondary images, and they can be attached to a report as a part of the radiology service to increase communication between radiologists and clinicians and to prevent the loss of primary images from the radiology file

  6. Revolutionize Propulsion Test Facility High-Speed Video Imaging with Disruptive Computational Photography Enabling Technology

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced rocket propulsion testing requires high-speed video recording that can capture essential information for NASA during rocket engine flight certification...

  7. Efficient image or video encryption based on spatiotemporal chaos system

    International Nuclear Information System (INIS)

    Lian Shiguo

    2009-01-01

    In this paper, an efficient image/video encryption scheme is constructed based on a spatiotemporal chaos system. The chaotic lattices are used to generate pseudorandom sequences and then encrypt image blocks one by one. By iterating the chaotic maps for a certain number of times, the generated pseudorandom sequences obtain high initial-value sensitivity and good randomness. The pseudorandom bits in each lattice are used to encrypt the Direct Current coefficient (DC) and the signs of the Alternating Current coefficients (ACs). Theoretical analysis and experimental results show that the scheme has good cryptographic security and perceptual security, and it does not noticeably affect the compression efficiency. These properties make the scheme a suitable choice for practical applications.
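    A minimal sketch of the keystream idea described above: a chaotic map is iterated to produce pseudorandom bytes, which are then XORed with image-block data. The paper uses a coupled spatiotemporal lattice and encrypts selected DCT coefficients; this example substitutes a single logistic map operating on raw bytes purely for illustration, with burn-in length and parameters chosen arbitrarily.

        import numpy as np

        def logistic_keystream(length, x0=0.3141592, r=3.9999, burn_in=1000):
            """Generate `length` pseudorandom bytes from the logistic map x <- r*x*(1-x)."""
            x = x0
            for _ in range(burn_in):        # discard the transient for better randomness
                x = r * x * (1.0 - x)
            out = np.empty(length, dtype=np.uint8)
            for i in range(length):
                x = r * x * (1.0 - x)
                out[i] = int(x * 256) % 256
            return out

        def xor_encrypt_block(block, key_x0):
            """Encrypt (or decrypt) one image block by XOR with the chaotic keystream."""
            flat = block.astype(np.uint8).ravel()
            ks = logistic_keystream(flat.size, x0=key_x0)
            return np.bitwise_xor(flat, ks).reshape(block.shape)

        block = np.arange(64, dtype=np.uint8).reshape(8, 8)       # toy 8x8 image block
        cipher = xor_encrypt_block(block, key_x0=0.7234001)
        assert np.array_equal(xor_encrypt_block(cipher, key_x0=0.7234001), block)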

  8. Using Grounded Theory to Analyze Qualitative Observational Data that is Obtained by Video Recording

    Directory of Open Access Journals (Sweden)

    Colin Griffiths

    2013-06-01

    Full Text Available This paper presents a method for the collection and analysis of qualitative data that is derived by observation and that may be used to generate a grounded theory. Video recordings were made of the verbal and non-verbal interactions of people with severe and complex disabilities and the staff who work with them. Three dyads composed of a student/teacher or carer and a person with a severe or profound intellectual disability were observed in a variety of different activities that took place in a school. Two of these recordings yielded 25 minutes of video, which was transcribed into narrative format. The nature of the qualitative micro data that was captured is described and the fit between such data and classic grounded theory is discussed. The strengths and weaknesses of the use of video as a tool to collect data that is amenable to analysis using grounded theory are considered. The paper concludes by suggesting that using classic grounded theory to analyze qualitative data that is collected using video offers a method that has the potential to uncover and explain patterns of non-verbal interactions that were not previously evident.

  9. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  10. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J.M.

    2017-01-01

    Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  11. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  12. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the world-wide level of industrial development requires considerably higher standards of quality of manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Thanks to their properties (they are easily obtained and provide very good discrimination between the materials they penetrate), thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. Real-time investigations involve tube-type cameras, CCD cameras and, recently, CID cameras, which capture the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal processing technology included in the camera. The image acquisition card or frame grabber of a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that translate the object table horizontally and vertically and rotate it. Based on this system, many static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects are done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  13. Laser image recording on detonation nanodiamond films

    International Nuclear Information System (INIS)

    Mikheev, G M; Mikheev, K G; Mogileva, T N; Puzyr, A P; Bondar, V S

    2014-01-01

    A focused He–Ne laser beam is shown to cause local blackening of semitransparent detonation nanodiamond (DND) films at incident power densities above 600 W cm⁻². Data obtained with a Raman spectrometer and low-power 632.8-nm laser source indicate that the blackening is accompanied by a decrease in broadband background luminescence and emergence of sharp Raman peaks corresponding to the structures of nanodiamond and sp² carbon. The feasibility of image recording on DND films by a focused He–Ne laser beam is demonstrated. (letters)

  14. Laser image recording on detonation nanodiamond films

    Energy Technology Data Exchange (ETDEWEB)

    Mikheev, G M; Mikheev, K G; Mogileva, T N [Institute of Mechanics, Ural Branch of the Russian Academy of Sciences, Izhevsk (Russian Federation); Puzyr, A P; Bondar, V S [Institute of Biophysics, Siberian Branch of the Russian Academy of Sciences (Russian Federation)

    2014-01-31

    A focused He–Ne laser beam is shown to cause local blackening of semitransparent detonation nanodiamond (DND) films at incident power densities above 600 W cm⁻². Data obtained with a Raman spectrometer and low-power 632.8-nm laser source indicate that the blackening is accompanied by a decrease in broadband background luminescence and emergence of sharp Raman peaks corresponding to the structures of nanodiamond and sp² carbon. The feasibility of image recording on DND films by a focused He–Ne laser beam is demonstrated. (letters)

  15. Improvement of Skills in Cardiopulmonary Resuscitation of Pediatric Residents by Recorded Video Feedbacks.

    Science.gov (United States)

    Anantasit, Nattachai; Vaewpanich, Jarin; Kuptanon, Teeradej; Kamalaporn, Haruitai; Khositseth, Anant

    2016-11-01

    To evaluate pediatric residents' cardiopulmonary resuscitation (CPR) skills and their improvement after recorded video feedback. Pediatric residents from a university hospital were enrolled. The authors surveyed the level of pediatric resuscitation skill confidence by questionnaire. Eight psychomotor skills were evaluated individually, including airway, bag-mask ventilation, pulse check, prompt starting and technique of chest compression, high quality CPR, tracheal intubation, intraosseous access, and defibrillation. The mock code skills were also evaluated as a team using a high-fidelity mannequin simulator. All the participants attended a concise Pediatric Advanced Life Support (PALS) lecture and received video-recorded feedback for one hour. They were re-evaluated 6 wk later in the same manner. Thirty-eight residents were enrolled. All the participants had a moderate to high level of confidence in their CPR skills. Over 50% of participants passed the psychomotor skills tests, except for the bag-mask ventilation and intraosseous skills. There was poor correlation between confidence and passing the psychomotor skills test. After course feedback, the percentage of high quality CPR skill in the second course test was significantly improved (46% to 92%, p = 0.008). The pediatric resuscitation course should remain in the pediatric resident curriculum and should be re-evaluated frequently. Video-recorded feedback on the pitfalls during individual CPR skills and mock code case scenarios could improve short-term psychomotor CPR skills and lead to higher quality CPR performance.

  16. Analyzing communication skills of Pediatric Postgraduate Residents in Clinical Encounter by using video recordings.

    Science.gov (United States)

    Bari, Attia; Khan, Rehan Ahmed; Jabeen, Uzma; Rathore, Ahsan Waheed

    2017-01-01

    To analyze the communication skills of pediatric postgraduate residents in the clinical encounter by using video recordings. This qualitative exploratory research was conducted through video recording at The Children's Hospital Lahore, Pakistan. Residents who had attended the mandatory communication skills workshop offered by CPSP were included. The video recording of the clinical encounter was done by a trained audiovisual person while the resident was interacting with the patient. Data were analyzed by thematic analysis. Initially, on open coding, 36 codes emerged, and through axial and selective coding these were condensed to 17 subthemes. Out of these, four main themes emerged: (1) Courteous and polite attitude, (2) Marginal nonverbal communication skills, (3) Power game/Ignoring child participation and (4) Patient as medical object/Instrumental behaviour. All residents treated the patient as a medical object to reach the right diagnosis and ignored them as a human being. The doctors played a dominant role, and the residents displayed marginal nonverbal communication skills in the form of a lack of social touch and of appropriate eye contact due to documenting notes. A brief non-medical interaction for rapport building at the beginning of the interaction was missing, and there was a lack of child involvement. Paediatric postgraduate residents were polite while communicating with parents and children but lacked good nonverbal communication skills. The communication pattern in our study was mostly one-way, showing the doctors' instrumental behaviour and ignoring child participation.

  17. Design of a system based on DSP and FPGA for video recording and replaying

    Science.gov (United States)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system built on a Digital Signal Processor (DSP) and Field Programmable Gate Array (FPGA) architecture. The system achieves encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals that are displayed on a monitor during the navigation of airplanes and ships. In the architecture, the DSP is the main processor and handles the large amount of complicated calculation required during digital signal processing. The FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design of the system, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer in the system. This transfer mode avoids the bottleneck in data transfer and simplifies the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface of an Integrated Drive Electronics (IDE) hard disk, which provides high-speed data access and does not rely on a computer. The main functions of the logic on the FPGA are described and screenshots of the behavioral simulation are provided in this paper. In the design of the program on the DSP, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM, so that the CPU's computing performance is exploited without its intervention in data transfer, saving processing time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways and means of achieving high code performance are briefly presented. The data-processing capability of the system is satisfactory, and the smoothness of the replayed video is acceptable. By right of its design flexibility and reliable operation, the system based on DSP and FPGA

  18. An introduction to video image compression and authentication technology for safeguards applications

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1995-01-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced, complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images
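    A minimal sketch of the authentication step described above, under the assumption that a symmetric key is shared between the camera station and the monitoring station. The specific compression format and authentication algorithm selected in the paper are not stated here; this example simply attaches an HMAC-SHA256 tag to the compressed byte stream so that any alteration in transit can be detected.

        import hmac
        import hashlib

        SHARED_KEY = b"facility-secret-key"   # hypothetical key shared with the monitoring station

        def authenticate(compressed_image: bytes) -> bytes:
            """Return the tag to transmit alongside the compressed image."""
            return hmac.new(SHARED_KEY, compressed_image, hashlib.sha256).digest()

        def verify(compressed_image: bytes, tag: bytes) -> bool:
            """Check at the receiving end that the image was not altered in transit."""
            expected = hmac.new(SHARED_KEY, compressed_image, hashlib.sha256).digest()
            return hmac.compare_digest(expected, tag)

        payload = b"\xff\xd8...compressed surveillance frame..."  # stand-in for real JPEG bytes
        tag = authenticate(payload)
        assert verify(payload, tag)
        assert not verify(payload + b"tamper", tag)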

  19. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    Science.gov (United States)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is essential that the system ensure that, once recorded, the video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques of Digital Video Forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough review of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information, and we present a review of the various existing methods; much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  20. Cryptanalysis of a spatiotemporal chaotic image/video cryptosystem

    International Nuclear Information System (INIS)

    Rhouma, Rhouma; Belghith, Safya

    2008-01-01

    This Letter proposes two different attacks on a recently proposed chaotic cryptosystem for images and videos in [S. Lian, Chaos Solitons Fractals (2007), (doi: 10.1016/j.chaos.2007.10.054)]. The cryptosystem under study displays a weakness in the generation of the keystream. The encryption is performed by generating a keystream mixed with blocks generated from the plaintext and the ciphertext in a CBC-mode design. The keystream thus obtained remains unchanged for every encryption procedure. Guessing the keystream leads to guessing the key. Two possible attacks are then able to break the whole cryptosystem based on this drawback in the keystream generation. We also propose changing the description of the cryptosystem to make it robust against the described attacks by adopting a PCBC-mode design

  1. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
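    A minimal example of the "pristine vs. corrupted" class of full-reference metrics mentioned above (PSNR). This is a standard textbook metric, not one of the task-specific IQMs/VQMs developed by the authors; the synthetic noise stands in for sensor imagery degraded by a dust cloud.

        import numpy as np

        def psnr(pristine, corrupted, max_value=255.0):
            """Peak signal-to-noise ratio between two images of equal shape (in dB)."""
            mse = np.mean((pristine.astype(float) - corrupted.astype(float)) ** 2)
            if mse == 0:
                return float("inf")
            return 10.0 * np.log10(max_value ** 2 / mse)

        clean = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
        noisy = np.clip(clean + np.random.normal(0, 20, clean.shape), 0, 255).astype(np.uint8)
        print(f"PSNR of the degraded image: {psnr(clean, noisy):.1f} dB")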

  2. Digital Path Approach Despeckle Filter for Ultrasound Imaging and Video

    Directory of Open Access Journals (Sweden)

    Marek Szczepański

    2017-01-01

    Full Text Available We propose a novel filtering technique capable of reducing the multiplicative noise in ultrasound images that is an extension of the denoising algorithms based on the concept of digital paths. In this approach, the filter weights are calculated taking into account the similarity between pixel intensities that belong to the local neighborhood of the processed pixel, which is called a path. The output of the filter is estimated as the weighted average of pixels connected by the paths. The way of creating paths is pivotal and determines the effectiveness and computational complexity of the proposed filtering design. Such a procedure can be effective for different types of noise but fails in the presence of multiplicative noise. To increase the filtering efficiency for this type of disturbance, we introduce some improvements of the basic concept and new classes of similarity functions and finally extend our techniques to a spatiotemporal domain. The experimental results prove that the proposed algorithm provides results comparable with the state-of-the-art techniques for multiplicative noise removal in ultrasound images and it can be applied for real-time image enhancement of video streams.

  3. Image quality assessment for video stream recognition systems

    Science.gov (United States)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.
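    A hedged sketch of a built-in input-quality module in the spirit of the system above: the sharpest frames of a video stream are selected before they are passed to the recognition engine. The actual quality estimator used by the authors is not specified here; the variance of the Laplacian is used as a simple, commonly used focus/blur score, and OpenCV is assumed to be available.

        import cv2

        def sharpness_score(frame_bgr):
            """Higher values indicate a sharper (less blurred) frame."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var()

        def best_frames(video_path, top_k=3):
            """Return the top_k sharpest frames of a video for downstream recognition."""
            capture = cv2.VideoCapture(video_path)
            scored = []
            while True:
                ok, frame = capture.read()
                if not ok:
                    break
                scored.append((sharpness_score(frame), frame))
            capture.release()
            scored.sort(key=lambda item: item[0], reverse=True)
            return [frame for _, frame in scored[:top_k]]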

  4. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure

  5. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of the UAV trajectory using a video image matching system based on SURF (Speeded-Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of the UAV trajectory (the sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems
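    A hedged sketch of the pose-estimation step described above: matched feature points from consecutive frames are used to recover the relative rotation and translation from the epipolar geometry. OpenCV is assumed; because SURF is an optional (non-free) OpenCV module, ORB is used here as a stand-in detector, and OpenCV's standard RANSAC replaces the Preemptive RANSAC of the paper.

        import cv2
        import numpy as np

        def relative_pose(img1, img2, camera_matrix):
            """Estimate (R, t) of the camera between two grayscale frames."""
            orb = cv2.ORB_create(2000)
            kp1, des1 = orb.detectAndCompute(img1, None)
            kp2, des2 = orb.detectAndCompute(img2, None)

            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des1, des2)
            pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

            # Epipolar geometry from inliers only (outliers rejected by RANSAC).
            E, mask = cv2.findEssentialMat(pts1, pts2, camera_matrix,
                                           method=cv2.RANSAC, prob=0.999, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, camera_matrix, mask=mask)
            return R, t   # accumulate over the sequence to build the trajectory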

  6. Observing the Testing Effect using Coursera Video-recorded Lectures: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Paul Zhihao eYONG

    2016-01-01

    Full Text Available We investigated the testing effect in Coursera video-based learning. One hundred and twenty-three participants either (a) studied an instructional video-recorded lecture four times, (b) studied the lecture three times and took one recall test, or (c) studied the lecture once and took three tests. They then took a final recall test, either immediately or a week later, through which their learning was assessed. Whereas repeated studying produced better recall performance than did repeated testing when the final test was administered immediately, testing produced better performance when the final test was delayed until a week after. The testing effect was observed using Coursera lectures. Future directions are documented.

  7. Blur Quantification of Medical Images: Dicom Media, Whole Slide Images, Generic Images and Videos

    Directory of Open Access Journals (Sweden)

    D. Ameisen

    2016-10-01

    platform. The focus map may be displayed on the web interface next to the thumbnail link to the WSI, or in the viewer as a semi-transparent layer over the WSI, or over the WSI map. During the test phase and first integrations in laboratories and hospitals as well as in the FlexMIm project, more than 5000 whole slide images of multiple formats (Hamamatsu NDPI, Aperio SVS, Mirax MRXS, JPEG2000, …) as well as hundreds of thousands of images of various formats (DICOM, TIFF, PNG, JPEG, ...) and videos (H264) have been analyzed using our standalone software or our C, C++, Java and Python libraries. Using default or customizable threshold profiles, WSI are sorted as “accepted”, “to review”, or “to rescan”. In order to target the samples contained inside each WSI, special attention was paid to detecting blank tiles. Dynamic blank tile detection based on statistical analysis of each WSI was built and successfully validated for all our samples. Results More than 20 trillion pixels have been analyzed at a rate of 3.5 billion pixels per quad-core processor per minute. Quantified results can be stored in JSON formatted logs or inside a MySQL or MongoDB database, or converted to any chosen data structure to be interoperable with existing software, each tile’s result being accessible in addition to the quality map and the global quality results. This solution is easily scalable as images can be stored at different locations, analysis can be distributed amongst local or remote servers, and quantified results can be stored in remote databases.

  8. Heterogeneity image patch index and its application to consumer video summarization.

    Science.gov (United States)

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
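    A hedged sketch of an entropy-based per-frame heterogeneity score in the spirit of the HIP index: each frame is split into patches, the Shannon entropy of every patch's grey-level histogram is computed, and the frame score is the mean patch entropy. The exact HIP definition in the paper may differ; the patch size and bin count below are illustrative assumptions.

        import numpy as np

        def patch_entropy(patch, bins=32):
            """Shannon entropy (bits) of a patch's grey-level histogram."""
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist.astype(float) / max(hist.sum(), 1)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def frame_heterogeneity(gray_frame, patch_size=16):
            """Mean patch entropy of a grayscale frame; one point of the per-sequence curve."""
            h, w = gray_frame.shape
            scores = []
            for y in range(0, h - patch_size + 1, patch_size):
                for x in range(0, w - patch_size + 1, patch_size):
                    scores.append(patch_entropy(gray_frame[y:y + patch_size, x:x + patch_size]))
            return float(np.mean(scores))

        # Evaluating frame_heterogeneity over all frames yields a curve that can then be
        # searched for key frames, e.g. by picking local extrema.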

  9. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  10. Overview of image processing tools to extract physical information from JET videos

    International Nuclear Information System (INIS)

    Craciunescu, T; Tiseanu, I; Zoita, V; Murari, A; Gelfusa, M

    2014-01-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
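    A minimal sketch of the optical-flow methodology mentioned in the two records above, using OpenCV's dense Farneback estimator to derive per-pixel motion between consecutive camera frames (e.g. to follow filaments or pellets). The actual JET pipeline, including the MPEG compressed-domain approximation and the structural-information methods, is considerably more elaborate than this illustration.

        import cv2
        import numpy as np

        def dense_flow(prev_gray, next_gray):
            """Return per-pixel (dx, dy) motion between two grayscale frames."""
            return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                                pyr_scale=0.5, levels=3, winsize=15,
                                                iterations=3, poly_n=5, poly_sigma=1.2,
                                                flags=0)

        def mean_speed(flow):
            """Average motion magnitude in pixels/frame, a simple dynamics indicator."""
            magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
            return float(magnitude.mean())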

  11. Classifying Normal and Abnormal Status Based on Video Recordings of Epileptic Patients

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-01-01

    Full Text Available Based on video recordings of the movement of patients with epilepsy, this paper proposed a human action recognition scheme to detect distinct motion patterns and to distinguish the normal status from the abnormal status of epileptic patients. The scheme first extracts local features and holistic features, which are complementary to each other. Afterwards, a support vector machine is applied for classification. Based on the experimental results, this scheme obtains a satisfactory classification result and provides a fundamental analysis towards human-robot interaction with socially assistive robots in caring for patients with epilepsy (or other patients with brain disorders) in order to protect them from injury.
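
    As a minimal sketch of the classification stage described in this record, the code below trains a support-vector machine on pre-computed motion-feature vectors; the feature matrix and labels are random placeholders, not the paper's data, and the local/holistic feature extraction would precede this step.

        # Minimal sketch: classify "normal" vs "abnormal" motion-feature vectors with an SVM.
        # Feature extraction (local + holistic descriptors per video clip) is assumed to be
        # done elsewhere; X below is a placeholder matrix of such descriptors.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 64))          # 200 clips x 64-dim motion descriptors (placeholder)
        y = rng.integers(0, 2, size=200)        # 0 = normal status, 1 = abnormal status (placeholder)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # RBF-kernel support vector machine
        clf.fit(X_train, y_train)
        print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))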

  12. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may also include non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the critical frequency above which false modes are induced is 60 Hz for the Prosilica CV640C CMOS high-speed camera and 7.8 Hz for the webcam. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes lie below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
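
    The sketch below illustrates the pixel-summing and spectral analysis described in this record: gray levels in a small region are summed per frame to enhance amplitude resolution, and the dominant frequency is read from the FFT. The video file name and ROI coordinates are hypothetical, and the aliasing comment reflects the folding rule implied by the finite frame rate rather than the authors' exact model.

        # Minimal sketch (hypothetical file name and ROI): sum gray levels over a small
        # region per frame, then find the dominant frequency in the resulting signal.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("vibration.avi")         # hypothetical recording
        fps = cap.get(cv2.CAP_PROP_FPS)
        trace = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            roi = gray[100:120, 200:220]                # small region around the marked line
            trace.append(roi.sum())                     # summing pixels boosts gray-level resolution
        cap.release()

        trace = np.asarray(trace, dtype=float)
        trace -= trace.mean()
        spectrum = np.abs(np.fft.rfft(trace))
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
        print("dominant frequency (Hz):", freqs[np.argmax(spectrum)])

        # A true vibration component f above the Nyquist limit would appear aliased at
        # roughly |f - round(f / fps) * fps|, which is how non-physical modes can be predicted.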

  13. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of a vibration signal, but may also include non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the critical frequency above which false modes are induced is 60 Hz for the Prosilica CV640C CMOS high-speed camera and 7.8 Hz for the webcam. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes lie below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  14. Video Game Preservation in the UK: A Survey of Records Management Practices

    Directory of Open Access Journals (Sweden)

    Alasdair Bachell

    2014-10-01

    Full Text Available Video games are a cultural phenomenon; a medium like no other that has become one of the largest entertainment sectors in the world. While the UK boasts an enviable games development heritage, it risks losing a major part of its cultural output through an inability to preserve the games that are created by the country’s independent games developers. The issues go deeper than bit rot and other problems that affect all digital media; loss of context, copyright and legal issues, and the throwaway culture of the ‘next’ game all hinder the ability of fans and academics to preserve video games and make them accessible in the future. This study looked at the current attitudes towards preservation in the UK’s independent (‘indie’ video games industry by examining current record-keeping practices and analysing the views of games developers. The results show that there is an interest in preserving games, and possibly a desire to do so, but issues of piracy and cost prevent the industry from undertaking preservation work internally, and from allowing others to assume such responsibility. The recommendation made by this paper is not simply for preservation professionals and enthusiasts to collaborate with the industry, but to do so by advocating the commercial benefits that preservation may offer to the industry.

  15. Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.

    Science.gov (United States)

    Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R

    2018-06-01

    The purpose of this study was to determine the accuracy of 14 step-counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step counting methods estimated less than 100% of actual steps; Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05), whereas the AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.

  16. Neonatal apneic seizure of occipital lobe origin: continuous video-EEG recording.

    Science.gov (United States)

    Castro Conde, José Ramón; González-Hernández, Tomás; González Barrios, Desiré; González Campo, Candelaria

    2012-06-01

    We present 2 term newborn infants with apneic seizure originating in the occipital lobe that was diagnosed by video-EEG. One infant had ischemic infarction in the distribution of the posterior cerebral artery, extending to the cingulate gyrus. In the other infant, only transient occipital hyperechogenicity was observed by using neurosonography. In both cases, although the critical EEG discharge was observed at the occipital level, the infants presented no clinical manifestations. In patient 1, the discharge extended to the temporal lobe first, with subtle motor manifestations and tachycardia, then synchronously to both hemispheres (with bradypnea/hypopnea), and the background EEG activity became suppressed, at which point the infant experienced apnea. In patient 2, background EEG activity became suppressed right at the end of the focal discharge, coinciding with the appearance of apnea. In neither case did the clinical description by observers coincide with video-EEG findings. The existence of connections between the posterior limbic cortex and the temporal lobe and midbrain respiratory centers may explain the clinical symptoms recorded in these 2 cases. The novel features reported here include video-EEG capture of apneic seizure, ischemic lesion in the territory of the posterior cerebral artery as the cause of apneic seizure, and the appearance of apnea when the epileptiform ictal discharge extended to other cerebral areas or when EEG activity became suppressed. To date, none of these clinical findings have been previously reported. We believe this pathology may in fact be fairly common, but that video-EEG monitoring is essential for diagnosis.

  17. Video as a Metaphorical Eye: Images of Positionality, Pedagogy, and Practice

    Science.gov (United States)

    Hamilton, Erica R.

    2012-01-01

    Considered by many to be cost-effective and user-friendly, video technology is utilized in a multitude of contexts, including the university classroom. One purpose, although not often used, involves recording oneself teaching. This autoethnographic study focuses on the author's use of video and reflective practice in order to capture and examine…

  18. Individualized music played for agitated patients with dementia: analysis of video-recorded sessions.

    Science.gov (United States)

    Ragneskog, H; Asplund, K; Kihlgren, M; Norberg, A

    2001-06-01

    Many nursing home patients with dementia suffer from symptoms of agitation (e.g. anxiety, shouting, irritability). This study investigated whether individualized music could be used as a nursing intervention to reduce such symptoms in four patients with severe dementia. The patients were video-recorded during four sessions in four periods, including a control period without music, two periods where individualized music was played, and one period where classical music was played. The recordings were analysed by systematic observations and the Facial Action Coding System. Two patients became calmer during some of the individualized music sessions; one patient remained sitting in her armchair longer, and the other patient stopped shouting. For the two patients who were most affected by dementia, the noticeable effect of music was minimal. If the nursing staff succeed in discovering the music preferences of an individual, individualized music may be an effective nursing intervention to mitigate anxiety and agitation for some patients.

  19. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used to archive the surveillance pictures automatically. The design of the surveillance system is described, with examples of its operation.

  20. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  1. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements with H.264/AVC video have been gained...

  2. What do we do with all this video? Better understanding public engagement for image and video annotation

    Science.gov (United States)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions are being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  3. Binocular video ophthalmoscope for simultaneous recording of sequences of the human retina to compare dynamic parameters

    Science.gov (United States)

    Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim

    2017-07-01

    A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. The video sequences were registered off-line to compensate for eye movements. From the registered video sequences, dynamic parameters such as cardiac-cycle-induced reflection changes and eye movements can be calculated and compared between the eyes.
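
    As a minimal sketch of the off-line registration step described here, the code below aligns each frame to the first frame of a sequence using phase correlation with a pure-translation model; this is one of several possible registration techniques and is not necessarily the authors' implementation, and the frames are synthetic placeholders.

        # Minimal sketch: register frames to a reference frame to compensate eye movements.
        # `frames` is a placeholder list of same-sized grayscale retina images.
        import cv2
        import numpy as np

        def register_to_reference(frames):
            ref = np.float32(frames[0])
            registered = [frames[0]]
            for frame in frames[1:]:
                cur = np.float32(frame)
                (dx, dy), _ = cv2.phaseCorrelate(ref, cur)      # sub-pixel shift estimate
                # translate the frame back onto the reference (sign per OpenCV's convention)
                M = np.float32([[1, 0, -dx], [0, 1, -dy]])
                registered.append(cv2.warpAffine(frame, M, frame.shape[::-1]))
            return registered

        # Example with synthetic data:
        frames = [np.random.randint(0, 255, (256, 256), np.uint8) for _ in range(5)]
        aligned = register_to_reference(frames)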

  4. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    Science.gov (United States)

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  5. A video-image study of electrolytic flow structure in parallel electric-magnetic fields

    International Nuclear Information System (INIS)

    Gu, Z.H.; Fahidy, T.Z.

    1987-01-01

    The structure of free convective flow propagating from a vertical cathode into the electrolyte bulk has been studied via video-imaging. The enhancing effect of imposed horizontal uniform magnetic fields is manifested by vortex propagation and bifurcating flow.

  6. Gas expulsions and biological activity recorded offshore Molene Island, Brittany (France): video supervised recording of OBS data and analogue modelling

    Science.gov (United States)

    Klingelhoefer, F.; Géli, L.; Dellong, D.; Evangelia, B.; Tary, J. B.; Bayrakci, G.; Lantéri, N.; Lin, J. Y.; Chen, Y. F.; Chang, E. T. Y.

    2016-12-01

    Ocean bottom seismometers (OBS) commonly record signals from Short Duration Events (SDEs), having characteristics that are very different from those produced by tectonic earthquakes, e.g.: durations Brittany within the field of view of the EMSO-Molene underwater observatory, at a water depth of 12 m. The camera images and the recordings reveal the presence of crabs, octopus and several species of fish. Other acoustic signals can be related to the presence of moving algae or the influence from bad weather. Tides produce characteristic curves in the noise recorded on the geophones. SDEs have been recorded on both instruments, that may well have been caused by gas expulsions from the seabed into the water. In order to verify this hypothesis, an aquarium was filled with water overlying an even grain-sized quartz sand layer. A constant air supply through a narrow tube produced gas bubbles in a regular manner and an immersed ocean bottom geophone recorded the resulting acoustic signals. The bubbles tend to have a uniform size and to produce a waveform very close to those found on the OBSs. By comparing the number of SDEs and the volume of escaped air, estimates can be made regarding the volume of gas escaping the seafloor in different environments.

  7. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique.

    Directory of Open Access Journals (Sweden)

    Michael B McCamy

    Full Text Available Human eyes move continuously, even during visual fixation. These "fixational eye movements" (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.

  8. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  9. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  10. Investigating interactional competence using video recordings in ESL classrooms to enhance communication

    Science.gov (United States)

    Krishnasamy, Hariharan N.

    2016-08-01

    Interactional competence, or knowing and using the appropriate skills for interaction in various communication situations within a given speech community and culture, is important in the field of business and professional communication [1], [2]. Like many developing countries, Malaysia is a growing economy, and its undergraduates will have to acquire appropriate communication skills. In this study, two aspects of interactional communicative competence were investigated: linguistic and paralinguistic behaviors in small group communication, and conflict management in small group communication. Two groups of student participants were given a problem-solving task based on a letter of complaint. The two groups of students were video recorded during class hours for 40 minutes. The videos and transcriptions of the group discussions were analyzed to examine the use of language and interaction in small groups. The analysis, findings and interpretations were verified with three lecturers in the field of communication. The results showed that students were able to accomplish the given task using verbal and nonverbal communication. However, participation was unevenly distributed, with two students talking for less than a minute. Negotiation was based more on alternative views, and consensus was easily achieved. In conclusion, suggestions are given on ways to improve English language communication.

  11. Moving object detection in top-view aerial videos improved by image stacking

    Science.gov (United States)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
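
    The sketch below shows the generic stacking step described at the start of this record: consecutive frames are aligned to a reference and averaged, so that each pixel's redundant samples suppress noise. Alignment here uses a simple translation model via phase correlation; the paper's object-centred registration of small moving vehicles is not reproduced, and the frames are placeholders.

        # Minimal sketch of image stacking: align frames to the first frame and average them.
        import cv2
        import numpy as np

        def stack_images(frames):
            ref = np.float32(frames[0])
            acc = np.float32(frames[0])
            for frame in frames[1:]:
                cur = np.float32(frame)
                (dx, dy), _ = cv2.phaseCorrelate(ref, cur)       # global translation estimate
                M = np.float32([[1, 0, -dx], [0, 1, -dy]])
                acc += cv2.warpAffine(cur, M, cur.shape[::-1])    # warp onto the reference
            return (acc / len(frames)).astype(np.uint8)           # each pixel averages redundant samples

        frames = [np.random.randint(0, 255, (240, 320), np.uint8) for _ in range(8)]
        stacked = stack_images(frames)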

  12. High speed video recording system on a chip for detonation jet engine testing

    Directory of Open Access Journals (Sweden)

    Samsonov Alexander N.

    2018-01-01

    Full Text Available This article describes the development of a system on a chip for high-speed video recording. The research was started due to difficulties in selecting FPGAs and CPUs that combine wide bandwidth, high speed and a high number of multipliers for implementing real-time signal analysis. The current trend of high-density silicon device integration will soon result in a hybrid sensor-controller-memory circuit packed in a single chip. This research was the first step in a series of experiments on the manufacturing of such hybrid devices. The current task is the high-level synthesis of high-speed logic and a CPU core in an FPGA. The work resulted in the implementation and examination of an FPGA-based prototype.

  13. Fractal measures of video-recorded trajectories can classify motor subtypes in Parkinson's Disease

    Science.gov (United States)

    Figueiredo, Thiago C.; Vivas, Jamile; Peña, Norberto; Miranda, José G. V.

    2016-11-01

    Parkinson's Disease is one of the most prevalent neurodegenerative diseases in the world and affects millions of individuals worldwide. The clinical criteria for classification of motor subtypes in Parkinson's Disease are subjective and may be misleading when symptoms are not clearly identifiable. A video recording protocol was used to measure hand tremor of 14 individuals with Parkinson's Disease and 7 healthy subjects. A method for motor subtype classification was proposed based on the spectral distribution of the movement and compared with the existing clinical criteria. Box-counting dimension and Hurst Exponent calculated from the trajectories were used as the relevant measures for the statistical tests. The classification based on the power-spectrum is shown to be well suited to separate patients with and without tremor from healthy subjects and could provide clinicians with a tool to aid in the diagnosis of patients in an early stage of the disease.
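
    As an illustration of one of the measures named in this record, the sketch below estimates the box-counting dimension of a 2-D hand trajectory: the trajectory is rasterised onto grids of decreasing box size and the slope of log(count) versus log(boxes per axis) gives the dimension. The trajectory is synthetic and the clinical thresholds used in the study are not reproduced; the Hurst exponent is not shown.

        # Minimal sketch of a box-counting dimension estimate for a 2-D trajectory.
        import numpy as np

        def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32, 64)):
            # normalise the trajectory into the unit square
            pts = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
            counts = []
            for n in sizes:                                  # n boxes per axis -> box side 1/n
                idx = np.minimum((pts * n).astype(int), n - 1)
                counts.append(len(np.unique(idx[:, 0] * n + idx[:, 1])))
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return slope

        t = np.linspace(0, 20 * np.pi, 5000)
        trajectory = np.column_stack([np.cos(t) + 0.05 * np.random.randn(t.size),
                                      np.sin(2 * t) + 0.05 * np.random.randn(t.size)])
        print("box-counting dimension ~", round(box_counting_dimension(trajectory), 2))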

  14. Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system

    Directory of Open Access Journals (Sweden)

    Michael B. McCamy

    2013-02-01

    Full Text Available Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so-called "fixational eye movements", which include microsaccades, drift, and ocular microtremor (OMT). Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, <0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT's small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.
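
    For readers unfamiliar with how microsaccades are typically extracted from eye-position traces, the sketch below applies a median-based velocity threshold in the spirit of the widely used Engbert and Kliegl approach; it is a generic illustration, not this study's exact pipeline, and the sampling rate, trace and injected event are synthetic placeholders.

        # Minimal sketch: velocity-threshold microsaccade detection on an eye-position trace.
        import numpy as np

        def detect_microsaccades(x, y, fs, lam=6.0):
            vx = np.gradient(x) * fs                     # velocity in deg/s if x, y are in degrees
            vy = np.gradient(y) * fs
            # median-based velocity noise estimate, per axis (Engbert-Kliegl style)
            sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
            sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
            above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
            edges = np.flatnonzero(np.diff(above.astype(int)))
            if len(edges) % 2:                           # drop an unmatched trailing edge
                edges = edges[:-1]
            return list(zip(edges[::2] + 1, edges[1::2] + 1))   # (onset, offset) sample indices

        fs = 500.0                                       # assumed sampling rate (Hz)
        t = np.arange(0, 2, 1 / fs)
        x = 0.05 * np.random.randn(t.size).cumsum() / 50         # synthetic drift-like trace (deg)
        y = 0.05 * np.random.randn(t.size).cumsum() / 50
        x[400:410] += np.linspace(0, 0.3, 10)            # injected 0.3 deg microsaccade-like step
        print(detect_microsaccades(x, y, fs))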

  15. Point-of-View Recording Devices for Intraoperative Neurosurgical Video Capture

    Directory of Open Access Journals (Sweden)

    Jose Luis Porras

    2016-10-01

    Full Text Available Introduction: The ability to record and stream neurosurgery is an unprecedented opportunity to further research, medical education, and quality improvement. Here, we appraise the ease of implementation of existing POV devices when capturing and sharing procedures from the neurosurgical operating room, and detail their potential utility in this context. Methods: Our neurosurgical team tested and critically evaluated features of the Google Glass and Panasonic HX-A500 cameras including ergonomics, media quality, and media sharing in both the operating theater and the angiography suite. Results: Existing devices boast several features that facilitate live recording and streaming of neurosurgical procedures. Given that their primary application is not intended for the surgical environment, we identified a number of concrete, yet improvable, limitations. Conclusion: The present study suggests that neurosurgical video capture and live streaming represents an opportunity to contribute to research, education, and quality improvement. Despite this promise, shortcomings render existing devices impractical for serious consideration. We describe the features that future recording platforms should possess to improve upon existing technology.

  16. Video Observations, Atmospheric Path, Orbit and Fragmentation Record of the Fall of the Peekskill Meteorite

    Science.gov (United States)

    Ceplecha, Z.; Brown, P.; Hawkes, R. L.; Wertherill, G.; Beech, M.; Mossman, K.

    1996-02-01

    Large Near-Earth Asteroids have played a role in modifying the character of the surface geology of the Earth over long time scales through impacts. Recent modeling of the disruption of large meteoroids during atmospheric flight has emphasized the dramatic effects that smaller objects may also have on the Earth's surface. However, comparison of these models with observations has not been possible until now. Peekskill is only the fourth meteorite to have been recovered for which detailed and precise data exist on the meteoroid atmospheric trajectory and orbit. Consequently, there are few constraints on the position of meteorites in the solar system before impact on Earth. In this paper, a preliminary analysis based on 4 of all 15 video recordings of the fireball of October 9, 1992, which resulted in the fall of a 12.4 kg ordinary chondrite (H6 monomict breccia) in Peekskill, New York, will be given. Preliminary computations revealed that the Peekskill fireball was an Earth-grazing event, the third such case with precise data available. The body, with an initial mass of the order of 10⁴ kg, was in a pre-collision orbit with a = 1.5 AU, an aphelion of slightly over 2 AU and an inclination of 5°. The no-atmosphere geocentric trajectory would have led to a perigee of 22 km above the Earth's surface, but the body never reached this point due to tremendous fragmentation and other forms of ablation. The dark flight of the recovered meteorite started from a height of 30 km, when the velocity dropped below 3 km/s, and the body continued 50 km more without ablation, until it hit a parked car in Peekskill, New York with a velocity of about 80 m/s. Our observations are the first video records of a bright fireball and the first motion pictures of a fireball with an associated meteorite fall.

  17. A method for assessing the regional vibratory pattern of vocal folds by analysing the video recording of stroboscopy.

    Science.gov (United States)

    Lee, J S; Kim, E; Sung, M W; Kim, K H; Sung, M Y; Park, K S

    2001-05-01

    Stroboscopy and kymography have been used to examine motional abnormalities of the vocal folds and to visualise their regional vibratory pattern. In a previous study (Laryngoscope, 1999), we introduced the conceptual idea of videostrobokymography, in which the concept of kymography is applied to pre-recorded video images obtained with stroboscopy, and showed its possible clinical application to various disorders of the vocal folds. However, a more detailed description of the software and the mathematical formulation used in this system is needed for the reproduction of similar systems. The hardware configuration, the user interface and the detailed procedures, including the mathematical equations used in the videostrobokymography software, are presented in this study. As an initial clinical trial, videostrobokymography was applied to the preoperative and postoperative videostroboscopic images of 15 patients with Reinke's edema. On preoperative examination, videostrobokymograms showed an irregular pattern of mucosal waves and, in some patients, a relatively constant glottic gap during phonation. After the operation, the voice quality of all patients was improved in acoustic and aerodynamic assessments, and videostrobokymography showed clearly improved mucosal waves (change in open quotient: mean +/- SD = 0.11 +/- 0.05).
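
    The basic kymographic step can be sketched as follows: the same pixel line, chosen across the glottis, is taken from every stroboscopic video frame and the lines are stacked over time into one space-time image from which open and closed phases can be read. The file name and line position are hypothetical, and this is only the generic construction, not the authors' software.

        # Minimal sketch: build a kymogram from a pre-recorded stroboscopic video.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("stroboscopy.avi")    # hypothetical pre-recorded exam
        row = 240                                    # image row crossing the vocal folds (assumed)
        lines = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            lines.append(gray[row, :])               # one spatial line per frame
        cap.release()

        kymogram = np.vstack(lines)                  # rows = time, columns = position along the line
        cv2.imwrite("kymogram.png", kymogram)

        # The open quotient of a vibratory cycle can then be estimated as
        # (frames in which the glottal gap is open) / (frames per cycle).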

  18. Comparative study of image registration techniques for bladder video-endoscopy

    Science.gov (United States)

    Ben Hamadou, Achraf; Soussen, Charles; Blondel, Walter; Daul, Christian; Wolf, Didier

    2009-07-01

    Bladder cancer is widespread in the world, and many adequate diagnosis techniques exist. Video-endoscopy remains the standard clinical procedure for visual exploration of the bladder internal surface. However, video-endoscopy is limited by the fact that the imaged area in each frame is only about 1 cm², and lesions are typically spread over several images. The aim of this contribution is to assess the performance of two mosaicing algorithms leading to the construction of panoramic maps (a single image) of bladder walls. The quantitative comparison study is performed on a set of real endoscopic exam data and on simulated data relative to a bladder phantom.

  19. Fast optical recording media based on semiconductor nanostructures for image recording and processing

    International Nuclear Information System (INIS)

    Kasherininov, P. G.; Tomasov, A. A.

    2008-01-01

    Fast optical recording media based on semiconductor nanostructures (CdTe, GaAs) for image recording and processing are developed, with a speed of up to 10⁶ cycles/s (which exceeds the speed of known recording media based on metal-insulator-semiconductor-(liquid crystal) (MIS-LC) structures by two to three orders of magnitude), a photosensitivity of 10⁻² V/cm², and a spatial resolution of 5-10 line pairs/mm. Operating principles of the nanostructures as fast optical recording media and methods for reading images recorded in such media are described. Fast optical processors for recording images in incoherent light based on CdTe crystal nanostructures are implemented. The possibility of their application to fabricate image correlators is shown.

  20. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    Science.gov (United States)

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
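
    The core synchronisation idea in this record can be sketched with a cross-correlation: a known pseudo-random signal embedded in (or recorded alongside) each stream is recovered, and the lag that maximises the cross-correlation between the two copies gives the offset between streams. The sampling rate, signals and offset below are synthetic placeholders, not the authors' hardware design.

        # Minimal sketch: recover the offset between two streams sharing an embedded random signal.
        import numpy as np

        rng = np.random.default_rng(1)
        fs = 1000.0                                   # common reference rate after resampling (assumed)
        code = rng.standard_normal(2000)              # the embedded random signal

        true_lag = 137                                # unknown offset to be recovered (samples)
        stream_a = np.concatenate([np.zeros(true_lag), code]) + 0.2 * rng.standard_normal(2000 + true_lag)
        stream_b = code + 0.2 * rng.standard_normal(2000)

        # full cross-correlation; the argmax gives how far stream_a lags behind stream_b
        xcorr = np.correlate(stream_a, stream_b, mode="full")
        lag = np.argmax(xcorr) - (len(stream_b) - 1)
        print("estimated offset: %d samples (%.1f ms)" % (lag, 1000 * lag / fs))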

  1. On-Board Video Recording Unravels Bird Behavior and Mortality Produced by High-Speed Trains

    Directory of Open Access Journals (Sweden)

    Eladio L. García de la Morena

    2017-10-01

    Full Text Available Large high-speed railway (HSR networks are planned for the near future to accomplish increased transport demand with low energy consumption. However, high-speed trains produce unknown avian mortality due to birds using the railway and being unable to avoid approaching trains. Safety and logistic difficulties have precluded until now mortality estimation in railways through carcass removal, but information technologies can overcome such problems. We present the results obtained with an experimental on-board system to record bird-train collisions composed by a frontal recording camera, a GPS navigation system and a data storage unit. An observer standing in the cabin behind the driver controlled the system and filled out a form with data of collisions and bird observations in front of the train. Photographs of the train front taken before and after each journey were used to improve the record of killed birds. Trains running the 321.7 km line between Madrid and Albacete (Spain at speeds up to 250–300 km/h were equipped with the system during 66 journeys along a year, totaling approximately 14,700 km of effective recording. The review of videos produced 1,090 bird observations, 29.4% of them corresponding to birds crossing the infrastructure under the catenary and thus facing collision risk. Recordings also showed that 37.7% bird crossings were of animals resting on some element of the infrastructure moments before the train arrival, and that the flight initiation distance of birds (mean ± SD was between 60 ± 33 m (passerines and 136 ± 49 m (raptors. Mortality in the railway was estimated to be 60.5 birds/km year on a line section with 53 runs per day and 26.1 birds/km year in a section with 25 runs per day. Our results are the first published estimation of bird mortality in a HSR and show the potential of information technologies to yield useful data for monitoring the impact of trains on birds via on-board recording systems. Moreover

  2. Energy use of televisions and video cassette recorders in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Meier, Alan; Rosen, Karen

    1999-03-01

    In an effort to more accurately determine nationwide energy consumption, the U.S. Department of Energy has recently commissioned studies with the goal of improving its understanding of the energy use of appliances in the miscellaneous end-use category. This study presents an estimate of the residential energy consumption of two of the most common domestic appliances in the miscellaneous end-use category: color televisions (TVs) and video cassette recorders (VCRs). The authors used a bottom-up approach in estimating national TV and VCR energy consumption. First, they obtained estimates of stock and usage from national surveys, while TV and VCR power measurements and other data were recorded at repair and retail shops. Industry-supplied shipment and sales distributions were then used to minimize bias in the power measurement samples. To estimate national TV and VCR energy consumption values, ranges of power draw and mode usage were created to represent situations in homes with more than one unit. Average energy use values for homes with one unit, two units, etc. were calculated and summed to provide estimates of total national TV and VCR energy consumption.
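
    The bottom-up arithmetic behind such an estimate can be sketched in a few lines: per-unit annual energy is the sum over operating modes of power draw times hours in that mode, scaled by the national stock. All numbers below are illustrative placeholders, not the study's measured values.

        # Minimal sketch of a bottom-up appliance energy estimate (all figures are placeholders).
        STOCK_TVS = 220e6                       # assumed national stock of colour TVs
        MODES = {                               # watts, hours per day in each mode (assumed)
            "active":  (75.0, 4.0),
            "standby": ( 5.0, 20.0),
        }

        kwh_per_unit = sum(w * h for w, h in MODES.values()) * 365 / 1000.0
        national_twh = kwh_per_unit * STOCK_TVS / 1e9
        print("per-unit: %.0f kWh/yr, national: %.1f TWh/yr" % (kwh_per_unit, national_twh))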

  3. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storing, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done using the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, which is Altera's lowest-power FPGA family, and consumes less than 40% of CYCLONE V 5CEFA7 FPGA resources on average.

  4. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  5. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals. Introduction; Quantization; Differential Coding; Transform Coding; Variable-Length Coding: Information Theory Results (II); Run-Length and Dictionary Coding: Information Theory Results (III).
    Part II: Still Image Compression. Still Image Coding: Standard JPEG; Wavelet Transform for Image Coding: JPEG2000; Nonstandard Still Image Coding.
    Part III: Motion Estimation and Compensation. Motion Analysis and Motion Compensation; Block Matching; Pel-Recursive Technique; Optical Flow; Further Discussion and Summary on 2-D Motion Estimation.
    Part IV: Video Compression. Fundam

  6. Video Surveillance of Epilepsy Patients using Color Image Processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Alving, Jørgen

    2007-01-01

    This report introduces a method for tracking patients under video surveillance based on a marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other movi

  7. Video surveillance of epilepsy patients using color image processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Vilic, Adnan

    2014-01-01

    This paper introduces a method for tracking patients under video surveillance based on a color marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other mov...
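
    The colour-marker tracking described in this record and the one above can be sketched as a per-frame colour threshold followed by a blob centroid, as below. The video file name and the HSV colour bounds are placeholders, and this generic approach is not claimed to be the authors' exact pipeline for handling occlusions and fast movement.

        # Minimal sketch: track a coloured marker by HSV thresholding and centroid extraction.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("ward_camera.avi")            # hypothetical surveillance video
        lower = np.array([40, 80, 80])                        # assumed green-ish marker bounds (HSV)
        upper = np.array([80, 255, 255])
        track = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, lower, upper)            # binary map of marker-coloured pixels
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            m = cv2.moments(mask)
            if m["m00"] > 0:                                  # marker visible in this frame
                track.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
            else:
                track.append(None)                            # occlusion or marker out of view
        cap.release()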

  8. Thinking Images: Doing Philosophy in Film and Video

    Science.gov (United States)

    Parkes, Graham

    2009-01-01

    Over the past several decades film and video have been steadily infiltrating the philosophy curriculum at colleges and universities. Traditionally, teachers of philosophy have not made much use of "audiovisual aids" in the classroom beyond the chalk board or overhead projector, with only the more adventurous playing audiotapes, for example, or…

  9. Authenticity techniques for PACS images and records

    Science.gov (United States)

    Wong, Stephen T. C.; Abundo, Marco; Huang, H. K.

    1995-05-01

    Along with the digital radiology environment supported by picture archiving and communication systems (PACS) comes a new problem: How to establish trust in multimedia medical data that exist only in the easily altered memory of a computer. Trust is characterized in terms of integrity and privacy of digital data. Two major self-enforcing techniques can be used to assure the authenticity of electronic images and text -- key-based cryptography and digital time stamping. Key-based cryptography associates the content of an image with the originator using one or two distinct keys and prevents alteration of the document by anyone other than the originator. A digital time stamping algorithm generates a characteristic 'digital fingerprint' for the original document using a mathematical hash function, and checks that it has not been modified. This paper discusses these cryptographic algorithms and their appropriateness for a PACS environment. It also presents experimental results of cryptographic algorithms on several imaging modalities.
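
    The two ideas above can be illustrated with standard library primitives: a hash-based 'digital fingerprint' that changes if the image file is altered, and a keyed signature (HMAC used here as a simple stand-in for key-based cryptography) that additionally ties the fingerprint to the originator. The file name and key are placeholders, and a real time-stamping service is not shown.

        # Minimal sketch: fingerprint and keyed signature for an archived image file.
        import hashlib
        import hmac

        def fingerprint(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()   # changes if even one bit changes

        def keyed_signature(path, key: bytes):
            with open(path, "rb") as f:
                return hmac.new(key, f.read(), hashlib.sha256).hexdigest()

        # At archive time the fingerprint/signature is stored (or sent to a time-stamping
        # service); at retrieval time it is recomputed and compared.
        # print(fingerprint("study_0001.dcm"))
        # print(keyed_signature("study_0001.dcm", b"originator-secret-key"))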

  10. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    Science.gov (United States)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  11. Application of video imaging for improvement of patient set-up

    International Nuclear Information System (INIS)

    Ploeger, Lennert S.; Frenay, Michel; Betgen, Anja; Bois, Josien A. de; Gilhuijs, Kenneth G.A.; Herk, Marcel van

    2003-01-01

    Background and purpose: For radiotherapy of prostate cancer, the patient is usually positioned in the left-right (LR) direction by aligning a single marker on the skin with the projection of a room laser. The aim of this study is to investigate the feasibility of a room-mounted video camera in combination with previously acquired CT data to improve patient set-up along the LR axis. Material and methods: The camera was mounted in the treatment room at the caudal side of the patient. For 22 patients with prostate cancer 127 video and portal images were acquired. The set-up error determined by video imaging was found by matching video images with rendered CT images using various techniques. This set-up error was retrospectively compared with the set-up error derived from portal images. It was investigated whether the number of corrections based on portal imaging would decrease if the information obtained from the video images had been used prior to irradiation. Movement of the skin with respect to bone was quantified using an analysis of variance method. Results: The measurement of the set-up error was most accurate for a technique where outlines and groins on the left and right side of the patient were delineated and aligned individually to the corresponding features extracted from the rendered CT image. The standard deviations (SD) of the systematic and random components of the set-up errors derived from the portal images in the LR direction were 1.5 and 2.1 mm, respectively. When the set-up of the patients was retrospectively adjusted based on the video images, the SD of the systematic and random errors decreased to 1.1 and 1.3 mm, respectively. From retrospective analysis, a reduction of the number of set-up corrections (from nine to six corrections) is expected when the set-up would have been adjusted using the video images. The SD of the magnitude of motion of the skin of the patient with respect to the bony anatomy was estimated to be 1.1 mm. Conclusion: Video

  12. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    International Nuclear Information System (INIS)

    Wright, R.; Zander, M.; Brown, S.; Sandoval, D.; Gilpatrick, D.; Gibson, H.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) is discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. (Author) (3 figs., 4 refs.)

  13. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  14. Evaluation of video capture equipment for secondary image acquisition in the PACS.

    Science.gov (United States)

    Sukenobu, Yoshiharu; Sasagaki, Michihiro; Hirabuki, Norio; Naito, Hiroaki; Narumi, Yoshifumi; Inamura, Kiyonari

    2002-01-01

    There are many cases in which picture archiving and communication systems (PACS) are built with old-type existing modalities with no DICOM output. One of the methods for interfacing them to the PACS is to implement video capture (frame grabber) equipment. This equipment takes the analog video signal output from medical imaging modalities, and the amplitude of the video signal is A/D converted and supplied to the PACS. In this report, we measured and evaluated the accuracy with which this video capture equipment could capture the image. From the physical evaluation, we found that the pixel values of an original image and its captured image were almost equal in gray level from 20% to 90%. The change in the pixel values of a captured image was +/-3 on average. The change of gray-level concentration was acceptable and had an average standard deviation of around 0.63. As for resolution, degradation was observed at the highest physical level. In a subjective evaluation, the evaluation value of the CT image had a grade of 2.81 on average (the same quality for a reference image was set to a grade of 3.0). Abnormalities in heads, chests, and abdomens were judged not to influence diagnostic accuracy. Some small differences were seen when comparing captured and reference images, but they were recognized as having no influence on the diagnoses.

  15. C-space : Fostering new creative paradigms based on recording and sharing 'casual' videos through the internet

    NARCIS (Netherlands)

    Simoes, Bruno; Aksenov, Petr; Santos, Pedro; Arentze, Theo; De Amicis, Raffaele

    2015-01-01

    A key theme in ubiquitous computing is to create smart environments in which there is seamless integration of people, information, and physical reality. In this manuscript, we describe a set of tools that facilitate the creation of such environments, e.g., a service to transform videos recorded with

  16. Moving object detection in video satellite image based on deep learning

    Science.gov (United States)

    Zhang, Xueyang; Xiang, Junhua

    2017-11-01

    Moving object detection in video satellite imagery is studied, and a detection algorithm based on deep learning is proposed. The small-scale characteristics of remote sensing video objects are analyzed. First, a background subtraction algorithm based on an adaptive Gaussian mixture model is used to generate region proposals. The objects in the region proposals are then classified by a deep convolutional neural network, and moving objects of interest are detected by combining the classifications with prior information about the sub-satellite point. The classifier is a 21-layer residual convolutional neural network whose parameters are trained by transfer learning. Experimental results on video from the Tiantuo-2 satellite demonstrate the effectiveness of the algorithm.
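
    A minimal sketch of the first stage described above (region proposals from an adaptive Gaussian mixture background model) is given below, using OpenCV's MOG2 background subtractor. The CNN classification stage and the sub-satellite-point prior are not included; the parameter values are assumptions.

```python
import cv2

def region_proposals(video_path, min_area=4):
    """First-stage detector: adaptive Gaussian mixture background model
    followed by contour extraction; returns per-frame bounding boxes."""
    cap = cv2.VideoCapture(video_path)
    mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                             detectShadows=False)
    proposals = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = mog.apply(frame)                # foreground mask (0/255)
        mask = cv2.medianBlur(mask, 3)         # suppress single-pixel noise
        found = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                 cv2.CHAIN_APPROX_SIMPLE)
        contours = found[0] if len(found) == 2 else found[1]   # OpenCV 3/4
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]            # keep small objects
        proposals.append(boxes)
    cap.release()
    return proposals
```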

  17. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data have been growing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. A music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.

  18. Development of fast video recording of plasma interaction with a lithium limiter on T-11M tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Lazarev, V.B., E-mail: v_lazarev@triniti.ru [SSC RF TRINITI Troitsk, Moscow (Russian Federation); Dzhurik, A.S.; Shcherbak, A.N. [SSC RF TRINITI Troitsk, Moscow (Russian Federation); Belov, A.M. [NRC “Kurchatov Institute”, Moscow (Russian Federation)

    2016-11-15

    Highlights: • The paper presents the results of a study of tokamak plasma interaction with lithium capillary-porous system limiters and PFCs using a high-speed color camera. • Registration of emission near the target in the SOL in neutral lithium light and measurement of the e-folding length for neutral lithium. • Registration of the effect of MHD instabilities on the CPS lithium limiter. • A sequence of frames shows the evolution of a lithium bubble on the surface of the lithium limiter. • View of the filament structure near the plasma edge in ohmic mode. - Abstract: A new high-speed color camera with interference filters was installed in the T-11M tokamak vessel for fast video recording of plasma-surface interaction with a lithium limiter based on a capillary-porous system (CPS). The paper presents the results of the study of tokamak plasma interaction (frame exposure time up to 4 μs) with the CPS lithium limiter in a stable stationary phase and in unstable regimes with internal disruptions, together with results of processing the images of the light emission around the probe, i.e. the e-folding length for neutral lithium penetration and the e-folding length for the lithium ion flux in the SOL region.

  19. Digital video image processing applications to two phase flow measurements

    International Nuclear Information System (INIS)

    Biscos, Y.; Bismes, F.; Hebrard, P.; Lavergne, G.

    1987-01-01

    Liquid spraying is common in various fields (combustion, cooling of hot surfaces, spray drying, ...). For two-phase flow modeling, it is necessary to test elementary laws (vaporizing drops, equations of motion of drops or bubbles, heat transfer, ...). For example, knowledge of the laws governing the behavior of a vaporizing liquid drop in a hot airstream and of drops impinging on a hot surface is important for two-phase flow modeling. In order to test these different laws in elementary cases, the authors developed different measurement techniques combining video and microcomputers. The test section (built of Perspex or glass) is illuminated with a thin sheet of light generated by a 15 mW He-Ne laser and an appropriate optical arrangement. Drops, bubbles or liquid films are observed at right angles by a video camera synchronised with a microcomputer, either directly or through an optical device (lens, telescope, microscope) providing sufficient magnification. Digitizing the video picture in real time, combined with appropriate numerical processing, allows one to obtain, in a non-interfering way, a wealth of information on atomization and vaporization as a function of space and time (drop size distribution; Sauter mean diameter as a function of the main flow parameters: air velocity, surface tension, temperature; iso-concentration curves; size evolution of vaporizing drops; evolution of the thickness of a film spreading on a hot surface ...).
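
    The drop-sizing step described above can be sketched as follows: threshold a laser-sheet frame, label connected regions as drops, and compute the Sauter mean diameter D32 = Σd³ / Σd². This is a simplified illustration, not the authors' processing chain; the threshold and pixel calibration are assumed inputs.

```python
import numpy as np
from scipy import ndimage

def sauter_mean_diameter(frame, threshold, pixel_size_um):
    """Estimate drop diameters from a laser-sheet image by thresholding and
    labelling connected regions, then compute D32 = sum(d^3) / sum(d^2)."""
    binary = frame > threshold
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))  # pixels per drop
    # equivalent-circle diameter of each detected drop, in micrometres
    diameters = 2.0 * np.sqrt(areas / np.pi) * pixel_size_um
    d32 = (diameters ** 3).sum() / (diameters ** 2).sum()
    return diameters, d32
```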

  20. Clinical requirements for radionuclide imaging and recording

    International Nuclear Information System (INIS)

    McCready, R.; Flower, M.; Royal Marsden Hospital, Sutton

    1985-01-01

    The quality of current nuclear medicine images and their display on hard copy makes diagnosis difficult and the interpretation of results by colleagues more difficult than it need be. The solution is to take full advantage of the power of currently available digital computers. It is understandable that the relatively small sales volume in the nuclear medicine field limits the effort and expense that can be put into development. However, it is hoped that if the requirements are defined, then advantage can be taken of recent developments in the mass market to incorporate these into nuclear medicine systems at lower cost than was previously possible. (orig.)

  1. Assessing the Content of YouTube Videos in Educating Patients Regarding Common Imaging Examinations.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Won, Eugene; Doshi, Ankur M

    2016-12-01

    To assess the content of currently available YouTube videos seeking to educate patients regarding commonly performed imaging examinations. After initial testing of possible search terms, the first two pages of YouTube search results for "CT scan," "MRI," "ultrasound patient," "PET scan," and "mammogram" were reviewed to identify educational patient videos created by health organizations. Sixty-three included videos were viewed and assessed for a range of features. Average views per video were highest for MRI (293,362) and mammography (151,664). Twenty-seven percent of videos used a nontraditional format (eg, animation, song, humor). All videos (100.0%) depicted a patient undergoing the examination, 84.1% a technologist, and 20.6% a radiologist; 69.8% mentioned examination lengths, 65.1% potential pain/discomfort, 41.3% potential radiation, 36.5% a radiology report/results, 27.0% the radiologist's role in interpretation, and 13.3% laboratory work. For CT, 68.8% mentioned intravenous contrast and 37.5% mentioned contrast safety. For MRI, 93.8% mentioned claustrophobia, 87.5% noise, 75.0% need to sit still, 68.8% metal safety, 50.0% intravenous contrast, and 0.0% contrast safety. For ultrasound, 85.7% mentioned use of gel. For PET, 92.3% mentioned radiotracer injection, 61.5% fasting, and 46.2% diabetic precautions. For mammography, unrobing, avoiding deodorant, and possible additional images were all mentioned by 63.6%; dense breasts were mentioned by 0.0%. Educational patient videos on YouTube regarding common imaging examinations received high public interest and may provide a valuable patient resource. Videos most consistently provided information detailing the examination experience and less consistently provided safety information or described the presence and role of the radiologist. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  2. PVR system design of advanced video navigation reinforced with audible sound

    NARCIS (Netherlands)

    Eerenberg, O.; Aarts, R.; De With, P.N.

    2014-01-01

    This paper presents an advanced video navigation concept for Personal Video Recording (PVR), based on jointly using the primary image and a Picture-in-Picture (PiP) image, featuring combined rendering of normal-play video fragments with audio and fast-search video. The hindering loss of audio during

  3. Abnormal eating behavior in video-recorded meals in anorexia nervosa.

    Science.gov (United States)

    Gianini, Loren; Liu, Ying; Wang, Yuanjia; Attia, Evelyn; Walsh, B Timothy; Steinglass, Joanna

    2015-12-01

    Eating behavior during meals in anorexia nervosa (AN) has long been noted to be abnormal, but little research has been done carefully characterizing these behaviors. These eating behaviors have been considered pathological, but are not well understood. The current study sought to quantify ingestive and non-ingestive behaviors during a laboratory lunch meal, compare them to the behaviors of healthy controls (HC), and examine their relationships with caloric intake and anxiety during the meal. A standardized lunch meal was video-recorded for 26 individuals with AN and 10 HC. Duration, frequency, and latency of 16 mealtime behaviors were coded using computer software. Caloric intake, dietary energy density (DEDS), and anxiety were also measured. Nine mealtime behaviors were identified that distinguished AN from HC: staring at food, tearing food, nibbling/picking, dissecting food, napkin use, inappropriate utensil use, hand fidgeting, eating latency, and nibbling/picking latency. Among AN, a subset of these behaviors was related to caloric intake and anxiety. These data demonstrate that the mealtime behaviors of patients with AN and HC differ significantly, and some of these behaviors may be associated with food intake and anxiety. These mealtime behaviors may be important treatment targets to improve eating behavior in individuals with AN. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  5. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Vol. 264, No. 1 (2016), pp. 153-166. ISSN 0379-0738. R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S. Institutional support: RVO:67985556. Keywords: Image forensic analysis * Image restoration * Image tampering detection * Image source identification. Subject RIV: JD - Computer Applications, Robotics. Impact factor: 1.989, year: 2016. http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  6. Application of video recording technology to improve husbandry and reproduction in the carmine bee-eater (Merops n. nubicus).

    Science.gov (United States)

    Ferrie, Gina M; Sky, Christy; Schutz, Paul J; Quinones, Glorieli; Breeding, Shawnlei; Plasse, Chelle; Leighty, Katherine A; Bettinger, Tammie L

    2016-01-01

    Incorporating technology with research is becoming increasingly important to enhance animal welfare in zoological settings. Video technology is used in the management of avian populations to facilitate efficient information collection on aspects of avian reproduction that are impractical or impossible to obtain through direct observation. Disney's Animal Kingdom(®) maintains a successful breeding colony of Northern carmine bee-eaters. This African species is a cavity nester, making their nesting behavior difficult to study and manage in an ex situ setting. After initial research focused on developing a suitable nesting environment, our goal was to continue developing methods to improve reproductive success and increase likelihood of chicks fledging. We installed infrared bullet cameras in five nest boxes and connected them to a digital video recording system, with data recorded continuously through the breeding season. We then scored and summarized nesting behaviors. Using remote video methods of observation provided much insight into the behavior of the birds in the colony's nest boxes. We observed aggression between birds during the egg-laying period, and therefore immediately removed all of the eggs for artificial incubation which completely eliminated egg breakage. We also used observations of adult feeding behavior to refine chick hand-rearing diet and practices. Although many video recording configurations have been summarized and evaluated in various reviews, we found success with the digital video recorder and infrared cameras described here. Applying emerging technologies to cavity nesting avian species is a necessary addition to improving management in and sustainability of zoo avian populations. © 2015 Wiley Periodicals, Inc.

  7. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    Science.gov (United States)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
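
    A hedged sketch of the segmentation idea (treating hidden activations as un-normalised log probabilities and cutting the video where the divergence between consecutive frames is high) might look like the following; the symmetrised KL divergence and the fixed threshold are illustrative choices, not necessarily those used by the authors.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def segment_frames(hidden_states, threshold):
    """Split a video into contexts by thresholding the symmetrised KL
    divergence between distributions derived from the caption model's
    hidden activations of consecutive frames."""
    boundaries = [0]
    for i in range(1, len(hidden_states)):
        p = softmax(hidden_states[i - 1])
        q = softmax(hidden_states[i])
        d = 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
        if d > threshold:          # high divergence -> a new context starts here
            boundaries.append(i)
    return boundaries
```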

  8. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm resolution in images was obtained by using a super-high-quality TV camera, developed for X-radiography, viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, so many test samples can be observed sequentially when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 x 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR-type reactor fuels and for the investigation of moving objects.

  9. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

  10. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results on JPEG compressed images as well as MJPEG and H.264 compressed videos show improvement in artifact reduction of the proposed algorithm over other directional and spatial fuzzy filters.
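
    For illustration, a plain (non-directional) Perona-Malik anisotropic diffusion step is sketched below; the directional variant and the fuzzy filtering stage described in the abstract are not reproduced, and the parameter values are assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=20.0, gamma=0.2):
    """Plain Perona-Malik diffusion: smooths flat regions while preserving
    edges.  Boundaries are handled by wrap-around (np.roll) for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite-difference gradients towards the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients: small across strong edges
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```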

  11. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle comprises a camera, a video signal switching and routing unit (VSU), and the Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.

  12. Progress in passive submillimeter-wave video imaging

    Science.gov (United States)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden threat detection and illustrate possible application scenarios.

  13. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  14. Effect of a Neonatal Resuscitation Course on Healthcare Providers' Performances Assessed by Video Recording in a Low-Resource Setting.

    Science.gov (United States)

    Trevisanuto, Daniele; Bertuola, Federica; Lanzoni, Paolo; Cavallin, Francesco; Matediana, Eduardo; Manzungu, Olivier Wingi; Gomez, Ermelinda; Da Dalt, Liviana; Putoto, Giovanni

    2015-01-01

    We assessed the effect of an adapted neonatal resuscitation program (NRP) course on healthcare providers' performances in a low-resource setting through the use of video recording. A video recorder, mounted on the radiant warmers in the delivery rooms at Beira Central Hospital, Mozambique, was used to record all resuscitations. One hundred resuscitations (50 before and 50 after participation in an adapted NRP course) were collected and assessed based on a previously published score. All 100 neonates received initial steps; of these, 77 and 32 needed bag-mask ventilation (BMV) and chest compressions (CC), respectively. There was a significant improvement in resuscitation scores at all levels of resuscitation from before to after the course: for "initial steps", for example, the score increased from 33% (IQR 28-39) to 44% (IQR 39-56). Overall, providers' performances improved after participation in the adapted NRP course. Video recording was well accepted by the staff, was useful for objective assessment of performance during resuscitation, and can be used as an educational tool in a low-resource setting.

  15. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  16. Forensic applications of infrared imaging for the detection and recording of latent evidence.

    Science.gov (United States)

    Lin, Apollo Chun-Yen; Hsieh, Hsing-Mei; Tsai, Li-Chin; Linacre, Adrian; Lee, James Chun-I

    2007-09-01

    We report on a simple method to record infrared (IR) reflected images in a forensic science context. Light sources using ultraviolet light have been used previously in the detection of latent prints, but the use of infrared light has been subject to less investigation. IR light sources were used to search for latent evidence, and the images were captured either by video or by a digital camera with a CCD array sensitive to IR wavelengths. Bloodstains invisible to the eye, inks, tire prints, gunshot residue, and charred documents on dark backgrounds were selected as typical materials that may be identified during a forensic investigation. All of these evidence types could be detected and identified using a range of photographic techniques. In this study, a one-in-eight dilution of blood could be detected on 10 different samples of black cloth. For 81 black writing inks, the observation rates were 95%, 88% and 42% for permanent markers, fountain pens and ball-point pens, respectively, on the three kinds of dark cloth. The black particles of gunshot residue scattered around the entrance hole were still observed under IR light at a distance of 60 cm for three different shooting ranges. A requirement of IR reflectivity is that there is contrast between the latent evidence and the background; in the absence of this contrast no latent image will be detected, as is the case with all light sources. The use of a video camera allows the recording of images either at a scene or in the laboratory. This report highlights and demonstrates the robustness of IR for detecting and recording the presence of latent evidence.

  17. The next generation borescope -- Video imaging measurement systems as portable as a fiberscope

    International Nuclear Information System (INIS)

    Boyd, C.E.

    1994-01-01

    Today, Remote Visual Inspection (RVI) techniques routinely save industry the significant costs associated with unscheduled shutdowns and equipment disassembly by enabling visual inspection of otherwise inaccessible equipment surfaces with instruments called borescopes. Specific applications in the nuclear industry include heat exchangers, condensers, boiler tubes, steam generators, headers, and other general interior surface inspections. While borescope inspections have achieved widespread utility, their potential applicability and value have been limited by their inability to provide dimensional information about the objects seen. This paper presents a simple but very accurate measurement technique that enables the inspector to make measurements of objects directly from the borescope image. While used effectively since 1990, the technique is designed for a video imaging borescope and has, therefore, not been available for shorter-length fiberscope applications--until now. On June 6, 1993, Welch Allyn introduced the VideoProbe XL, a video imaging borescope that is as portable and affordable as a one-meter fiberscope. This breakthrough not only extends video imaging into the rest of the fiberscope world, but also opens the door to this measurement capability.

  18. Sequential error concealment for video/images by weighted template matching

    DEFF Research Database (Denmark)

    Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt

    2012-01-01

    In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in packet loss environment are considered. Missing macro blocks are sequentially reconstructed by filling them with a weighted set of templates...

  19. Computerized video interaction self-instruction of MR imaging fundamentals utilizing laser disk technology

    International Nuclear Information System (INIS)

    Genberg, R.W.; Javitt, M.C.; Popky, G.L.; Parker, J.A.; Pinkney, M.N.

    1986-01-01

    Interactive computer-assisted self-instruction is emerging as a recognized didactic modality and is now being introduced to teach physicians the physics of MR imaging. The interactive system consists of a PC-compatible computer, a 12-inch laser disk drive, and a high-resolution monitor. The laser disk, capable of storing 54,000 images, is pressed from a previously edited video tape of MR and video images. The interactive approach is achieved through the use of the computer and appropriate software. The software is written to include computer graphics overlays of the laser disk images, to select interactive branching paths (depending on the user's response to directives or questions), and to provide feedback to the user so that he can assess his performance. One of their systems is available for use in the scientific exhibit area

  20. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Gait is a unique biometric feature perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches and analyzed the information contained in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches.
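
    The most common class energy representation, the Gait Energy Image, is simply the pixel-wise average of aligned, size-normalised binary silhouettes over a gait cycle; a minimal sketch is shown below (alignment and normalisation are assumed to have been done beforehand).

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Pixel-wise mean of aligned, size-normalised binary silhouettes
    (a list of equally sized 2D 0/1 arrays covering a gait cycle)."""
    stack = np.stack([s.astype(float) for s in silhouettes], axis=0)
    return stack.mean(axis=0)   # values in [0, 1]; bright = body present in most frames
```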

  1. Development of a Video Network for Efficient Dissemination of the Graphical Images in a Collaborative Environment.

    Directory of Open Access Journals (Sweden)

    Anatoliy Gordonov

    1999-01-01

    Video distribution inside a local area network can impede or even paralyze normal data transmission activities. The problem can be solved, at least for a while, by compression and by increasing bandwidth, but that solution can become excessively costly or otherwise impractical. Moreover, experience indicates that usage quickly expands to test the limits of bandwidth. In this paper we introduce and analyze the architecture of a Hybrid Analog-Digital Video Network (ADViNet), which separates video distribution from standard data handling functions. The network preserves the features of a standard digital network and, in addition, provides efficient real-time full-screen video transmission through a separate analog communication medium. A specially developed control and management protocol is discussed. For all practical purposes ADViNet may be used when graphical images have to be distributed among many nodes of a local area network. It relieves the burden of video distribution and allows users to combine efficient video data transmission with normal network activities.

  2. System and method for image registration of multiple video streams

    Science.gov (United States)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  3. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  4. Video-recorded simulated patient interactions: can they help develop clinical and communication skills in today's learning environment?

    Science.gov (United States)

    Seif, Gretchen A; Brown, Debora

    2013-01-01

    It is difficult to provide real-world learning experiences for students to master clinical and communication skills. The purpose of this paper is to describe a novel instructional method using self- and peer-assessment, reflection, and technology to help students develop effective interpersonal and clinical skills. The teaching method is described by the constructivist learning theory and incorporates the use of educational technology. The learning activities were incorporated into the pre-clinical didactic curriculum. The students participated in two video-recording assignments and performed self-assessments on each and had a peer-assessment on the second video-recording. The learning activity was evaluated through the self- and peer-assessments and an instructor-designed survey. This evaluation identified several themes related to the assignment, student performance, clinical behaviors and establishing rapport. Overall the students perceived that the learning activities assisted in the development of clinical and communication skills prior to direct patient care. The use of video recordings of a simulated history and examination is a unique learning activity for preclinical PT students in the development of clinical and communication skills.

  5. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin and extracts the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured with the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the MATLAB® Computer Vision toolbox. Results indicate that VPPG-based numerical approaches have the ability to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
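
    A bare-bones VPPG sketch consistent with the general approach (not the specific algorithms compared in the study) is shown below: average the green channel over a skin region in each frame and take the dominant spectral peak within a plausible heart-rate band. The fixed ROI and the band limits are assumptions.

```python
import numpy as np

def estimate_heart_rate(frames, fps, lo_bpm=40, hi_bpm=180):
    """Estimate HR (beats/min) from a list of RGB frames cropped to a skin
    ROI.  Face tracking, as used in the study, is omitted here."""
    signal = np.array([f[..., 1].mean() for f in frames], dtype=float)  # green channel
    signal -= signal.mean()                                   # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)         # in Hz
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```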

  6. LIDAR-INCORPORATED TRAFFIC SIGN DETECTION FROM VIDEO LOG IMAGES OF MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural detail of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used by many transportation agencies to survey the street view and roadside transportation infrastructure, such as traffic signs, guardrails, etc. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points belonging to overhead and roadside traffic signs can be obtained according to the setup specifications for traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically using the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraints of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among the video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the

  7. Disembodied perspective: third-person images in GoPro videos

    OpenAIRE

    Bédard, Philippe

    2015-01-01

    Used as much in extreme-sports videos and professional productions as in amateur and home videos, GoPro wearable cameras have become ubiquitous in contemporary moving image culture. During its swift and ongoing rise in popularity, GoPro has also enabled the creation of new and unusual points of view, among which are “third-person images”. This article introduces and defines this particular phenomenon through an approach that deals with both the aesthetic and technical characteristics of the i...

  8. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information available at the receiver is exploited as side information for decoding. Complex processes in video encoding, such as motion vector estimation, can therefore be moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process amounts only to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity-check (LDPC) coding method in the AWGN channel.

  9. Preliminary study on effects of 60Co γ-irradiation on video quality and the image de-noising methods

    International Nuclear Information System (INIS)

    Yuan Mei; Zhao Jianbin; Cui Lei

    2011-01-01

    Variable noise appears in video images once the playback device has been irradiated by γ-rays, degrading image clarity. In order to eliminate this image noise, the mechanism by which γ-irradiation affects the video playback device was studied in this paper, and methods to improve image quality with both hardware and software were proposed, using a protection program and a de-noising algorithm. The experimental results show that the scheme of video de-noising based on hardware and software can effectively improve the PSNR by 87.5 dB. (authors)
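
    Two building blocks implied by the abstract, a simple spatial de-noising filter and the PSNR figure of merit, can be sketched as follows; the median filter is a generic stand-in, since the paper's actual de-noising algorithm is not described in the abstract.

```python
import cv2
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def denoise_frame(frame, ksize=3):
    """Generic stand-in de-noiser: a median filter, which copes well with
    the impulsive, salt-and-pepper-like noise typical of radiation hits."""
    return cv2.medianBlur(frame, ksize)
```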

  10. The challenge associated with the robust computation of meteor velocities from video and photographic records

    Science.gov (United States)

    Egal, A.; Gural, P. S.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2017-09-01

    The CABERNET project was designed to push the limits for obtaining accurate measurements of meteoroid orbits from photographic and video meteor camera recordings. The discrepancy between the measured and theoretical orbits of these objects depends heavily on the semi-major axis determination, and thus on the reliability of the pre-atmospheric velocity computation. With a spatial resolution of 0.01° per pixel and a temporal resolution of up to 10 ms, CABERNET should be able to provide accurate measurements of the velocities and trajectories of meteors. To achieve this, it is necessary to improve the precision of the data reduction processes, and especially the determination of the meteor's velocity. In this work, most of the steps of the velocity computation are thoroughly investigated in order to reduce the uncertainties and error contributions at each stage of the reduction process. The accuracy of the measurement of meteor centroids is established and results in a precision of 0.09 pixels for CABERNET, which corresponds to 3.24″. Several methods to compute the velocity were investigated, based on the trajectory determination algorithms described in Ceplecha (1987) and Borovicka (1990), as well as the multi-parameter fitting (MPF) method proposed by Gural (2012). In the case of the MPF, many optimization methods were implemented in order to find the most efficient and robust technique to solve the minimization problem. The entire data reduction process is assessed using simulated meteors with different geometrical configurations and deceleration behaviors. It is shown that the multi-parameter fitting method proposed by Gural (2012) is the most accurate method to compute the pre-atmospheric velocity in all circumstances. Many techniques that assume constant velocity at the beginning of the path, as derived from the trajectory determination of Ceplecha (1987) or Borovicka (1990), can lead to large errors for decelerating meteors. The MPF technique also allows one to

  11. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Tominaga Shoji

    2008-01-01

    The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain, this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  12. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Plataniotis

    2008-05-01

    The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain, this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  13. Video outside versus video inside the web: do media setting and image size have an impact on the emotion-evoking potential of video?

    NARCIS (Netherlands)

    Verleur, R.; Verhagen, Pleunes Willem; Crawford, Margaret; Simonson, Michael; Lamboy, Carmen

    2001-01-01

    To explore the educational potential of video-evoked affective responses in a Web-based environment, the question was raised whether video in a Web-based environment is experienced differently from video in a traditional context. An experiment was conducted that studied the affect-evoking power of

  14. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the least significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate; these bit planes are used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
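
    A toy illustration of bit-plane embedding (not the full JPEG2000 Tier-1/Tier-2 pipeline or the OPAP step) is given below: secret bits overwrite the lowest bit planes of integer wavelet coefficients.

```python
import numpy as np

def embed_bits(coeffs, bits, n_planes=1):
    """Overwrite the lowest n_planes bit planes of integer coefficients with
    secret bits; raises if the payload exceeds the available capacity."""
    flat = coeffs.astype(np.int32).ravel().copy()
    capacity = flat.size * n_planes
    if len(bits) > capacity:
        raise ValueError("secret data exceeds embedding capacity")
    for i, bit in enumerate(bits):
        idx, plane = divmod(i, n_planes)     # which coefficient, which plane
        flat[idx] = (flat[idx] & ~(1 << plane)) | (int(bit) << plane)
    return flat.reshape(coeffs.shape)
```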

  15. Data and videos for ultrafast synchrotron X-ray imaging studies of metal solidification under ultrasound

    Directory of Open Access Journals (Sweden)

    Bing Wang

    2018-04-01

    The data presented in this article are related to the paper entitled 'Ultrafast synchrotron X-ray imaging studies of microstructure fragmentation in solidification under ultrasound' [Wang et al., Acta Mater. 144 (2018) 505-515]. This data article provides further supporting information and analytical methods, including data from both experiments and numerical simulations, as well as the Matlab code for processing the X-ray images. Six videos constructed from the processed synchrotron X-ray images are also provided.

  16. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Displays (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss...
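
    The quality-metric part can be approximated as follows: convert both images to CIE L*a*b* and compute a PSNR over the Lab values. This is only a rough stand-in for the display-aware metric in the paper, since the backlight-dimming display model itself is omitted; the choice of peak value is an assumption.

```python
import numpy as np
from skimage import color

def psnr_lab(reference_rgb, test_rgb):
    """PSNR computed on CIE L*a*b* values (inputs: uint8 RGB images)."""
    ref = color.rgb2lab(reference_rgb / 255.0)
    tst = color.rgb2lab(test_rgb / 255.0)
    mse = np.mean((ref - tst) ** 2)
    peak = 100.0                 # L* spans 0-100; using it as peak is a convention
    return 10.0 * np.log10(peak ** 2 / mse)
```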

  17. Mapping (and modeling) physiological movements during EEG-fMRI recordings: the added value of the video acquired simultaneously.

    Science.gov (United States)

    Ruggieri, Andrea; Vaudano, Anna Elisabetta; Benuzzi, Francesca; Serafini, Marco; Gessaroli, Giuliana; Farinelli, Valentina; Nichelli, Paolo Frigio; Meletti, Stefano

    2015-01-15

    During resting-state EEG-fMRI studies in epilepsy, patients' spontaneous head-face movements occur frequently. We tested the usefulness of synchronous video recording to identify and model the fMRI changes associated with non-epileptic movements to improve sensitivity and specificity of fMRI maps related to interictal epileptiform discharges (IED). Categorization of different facial/cranial movements during EEG-fMRI was obtained for 38 patients [with benign epilepsy with centro-temporal spikes (BECTS, n=16); with idiopathic generalized epilepsy (IGE, n=17); focal symptomatic/cryptogenic epilepsy (n=5)]. We compared at single subject- and at group-level the IED-related fMRI maps obtained with and without additional regressors related to spontaneous movements. As secondary aim, we considered facial movements as events of interest to test the usefulness of video information to obtain fMRI maps of the following face movements: swallowing, mouth-tongue movements, and blinking. Video information substantially improved the identification and classification of the artifacts with respect to the EEG observation alone (mean gain of 28 events per exam). Inclusion of physiological activities as additional regressors in the GLM model demonstrated an increased Z-score and number of voxels of the global maxima and/or new BOLD clusters in around three quarters of the patients. Video-related fMRI maps for swallowing, mouth-tongue movements, and blinking were comparable to the ones obtained in previous task-based fMRI studies. Video acquisition during EEG-fMRI is a useful source of information. Modeling physiological movements in EEG-fMRI studies for epilepsy will lead to more informative IED-related fMRI maps in different epileptic conditions. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Bollywood Movie Corpus for Text, Images and Videos

    OpenAIRE

    Madaan, Nishtha; Mehta, Sameep; Saxena, Mayank; Aggarwal, Aditi; Agrawaal, Taneea S; Malhotra, Vrinda

    2017-01-01

    In the past few years, several data sets have been released for text and images. We present an approach to creating a data set for use in detecting and removing gender bias from text. We also include a set of challenges we have faced while creating this corpus. In this work, we have worked with movie data from Wikipedia plots and movie trailers from YouTube. Our Bollywood Movie corpus contains 4000 movies extracted from Wikipedia and 880 trailers extracted from YouTube which were released from 1...

  19. Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk

    DEFF Research Database (Denmark)

    Jongejan, Bart; Paggio, Patrizia; Navarretta, Costanza

    2017-01-01

    This paper is about the automatic annotation of head movements in videos of face-to-face conversations. Manual annotation of gestures is resource consuming, and modelling gesture behaviours in different types of communicative settings requires many types of annotated data. Therefore, developing...
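
    The kinematic features named in the title can be computed from tracked head positions by finite differences, as in the hedged sketch below; the frame rate, tracking source, and any smoothing are assumptions, and the classifier itself is not included.

```python
import numpy as np

def movement_features(positions, fps):
    """Velocity, acceleration and jerk magnitudes per frame from a sequence
    of tracked head positions (array of shape (N, 2), pixel coordinates)."""
    dt = 1.0 / fps
    velocity = np.gradient(positions.astype(float), dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    return (np.linalg.norm(velocity, axis=1),
            np.linalg.norm(acceleration, axis=1),
            np.linalg.norm(jerk, axis=1))

# frames could then be flagged as movement wherever speed exceeds a threshold,
# with the movement type inferred from the dominant axis of the velocity
```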

  20. An integrable, web-based solution for easy assessment of video-recorded performances

    DEFF Research Database (Denmark)

    Subhi, Yousif; Todsen, Tobias; Konge, Lars

    2014-01-01

    , and access to this information should be restricted to select personnel. A local software solution may also ease the need for customization to local needs and integration into existing user databases or project management software. We developed an integrable web-based solution for easy assessment of video...

  1. Performance of a video-image-subtraction-based patient positioning system

    International Nuclear Information System (INIS)

    Milliken, Barrett D.; Rubin, Steven J.; Hamilton, Russell J.; Johnson, L. Scott; Chen, George T.Y.

    1997-01-01

    Purpose: We have developed and tested an interactive video system that utilizes image subtraction techniques to enable high precision patient repositioning using surface features. We report quantitative measurements of system performance characteristics. Methods and Materials: Video images can provide a high precision, low cost measure of patient position. Image subtraction techniques enable one to incorporate detailed information contained in the image of a carefully verified reference position into real-time images. We have developed a system using video cameras providing orthogonal images of the treatment setup. The images are acquired, processed and viewed using an inexpensive frame grabber and a PC. The subtraction images provide the interactive guidance needed to quickly and accurately place a patient in the same position for each treatment session. We describe the design and implementation of our system, and its quantitative performance, using images both to measure changes in position, and to achieve accurate setup reproducibility. Results: Under clinical conditions (60 cm field of view, 3.6 m object distance), the position of static, high contrast objects could be measured with a resolution of 0.04 mm (rms) in each of two dimensions. The two-dimensional position could be reproduced using the real-time image display with a resolution of 0.15 mm (rms). Two-dimensional measurement resolution of the head of a patient undergoing treatment for head and neck cancer was 0.1 mm (rms), using a lateral view, measuring the variation in position of the nose and the ear over the course of a single radiation treatment. Three-dimensional repositioning accuracy of the head of a healthy volunteer using orthogonal camera views was less than 0.7 mm (systematic error) with an rms variation of 1.2 mm. Setup adjustments based on the video images were typically performed within a few minutes. The higher precision achieved using the system to measure objects than to reposition
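
    A much simplified sketch of the core subtraction idea described above follows: subtract a verified reference image from the live camera frame so that misalignment shows up as residual structure that can be driven toward zero. This is only an illustration of the principle, not the clinical system or its calibration.

```python
# Simplified sketch of image-subtraction guidance for repositioning.
import numpy as np

def subtraction_overlay(reference: np.ndarray, live: np.ndarray) -> np.ndarray:
    """Both inputs are grayscale frames (uint8, same shape)."""
    diff = live.astype(np.int16) - reference.astype(np.int16)
    # Map the signed difference to a viewable 8-bit image centred at mid-gray;
    # a perfectly repositioned patient yields a flat gray overlay.
    return np.clip(diff + 128, 0, 255).astype(np.uint8)

def misalignment_score(reference: np.ndarray, live: np.ndarray) -> float:
    """RMS residual: a single number that shrinks as alignment improves."""
    return float(np.sqrt(np.mean((live.astype(float) - reference.astype(float)) ** 2)))
```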

  2. Camac interface for digitally recording infrared camera images

    International Nuclear Information System (INIS)

    Dyer, G.R.

    1986-01-01

    An instrument has been built to store the digital signals from a modified imaging infrared scanner directly in a digital memory. This procedure avoids the signal-to-noise degradation and dynamic range limitations associated with successive analog-to-digital and digital-to-analog conversions and the analog recording method normally used to store data from the scanner. This technique also allows digital data processing methods to be applied directly to recorded data and permits processing and image reconstruction to be done using either a mainframe or a microcomputer. If a suitable computer and CAMAC-based data collection system are already available, digital storage of up to 12 scanner images can be implemented for less than $1750 in materials cost. Each image is stored as a frame of 60 x 80 eight-bit pixels, with an acquisition rate of one frame every 16.7 ms. The number of frames stored is limited only by the available memory. Initially, data processing for this equipment was done on a VAX 11-780, but images may also be displayed on the screen of a microcomputer. Software for setting the displayed gray scale, generating contour plots and false-color displays, and subtracting one image from another (e.g., background suppression) has been developed for IBM-compatible personal computers

  3. MO-A-BRD-06: In Vivo Cherenkov Video Imaging to Verify Whole Breast Irradiation Treatment

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, R; Glaser, A [Dartmouth College, Hanover, NH (United States); Jarvis, L [Dartmouth-Hitchcock Medical Center, Lebanon, NH (United States); Gladstone, D [Dartmouth-Hitchcock Medical Center, Lebanon, NH (United States); Andreozzi, J; Hitchcock, W; Pogue, B [Dartmouth College, Hanover, NH (United States)

    2014-06-15

    Purpose: To show that in vivo video imaging of Cherenkov emission (Cherenkoscopy) can be acquired in the clinical treatment room without affecting the normal process of external beam radiation therapy (EBRT). Applications of Cherenkoscopy, such as patient positioning, movement tracking, treatment monitoring and superficial dose estimation, were examined. Methods: In a phase 1 clinical trial, including 12 patients undergoing post-lumpectomy whole breast irradiation, Cherenkov emission was imaged with a time-gated ICCD camera synchronized to the radiation pulses, during 10 fractions of the treatment. Images from different treatment days were compared by calculating the 2-D correlations with the averaged image. An edge detection algorithm was utilized to highlight biological features, such as the blood vessels. The superficial dose deposited at the sampling depth was derived from the Eclipse treatment planning system (TPS) and compared with the Cherenkov images. Skin reactions were graded weekly according to the Common Toxicity Criteria and digital photographs were obtained for comparison. Results: Real-time (fps = 4.8) imaging of Cherenkov emission was feasible, and feasibility tests indicated that it could be improved to video rate (fps = 30) with system improvements. Dynamic field changes due to fast MLC motion were imaged in real time. The average 2-D correlation was about 0.99, suggesting that the stability of this imaging technique and the repeatability of patient positioning were outstanding. Edge-enhanced images of blood vessels were observed, and could serve as unique biological markers for patient positioning and movement tracking (breathing). Small discrepancies exist between the Cherenkov images and the superficial dose predicted from the TPS, but the former agreed better with actual skin reactions than did the latter. Conclusion: Real-time Cherenkoscopy imaging during EBRT is a novel imaging tool that could be utilized for patient positioning, movement tracking
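
    The repeatability check mentioned above (a 2-D correlation of about 0.99 between fraction images and the averaged image) can be illustrated with a normalised cross-correlation like MATLAB's corr2. The sketch below is a hedged illustration only; the image variables are assumptions.

```python
# Hedged sketch: normalised 2-D correlation between a fraction image and the mean image.
import numpy as np

def corr2(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two images of identical shape."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Example usage with hypothetical per-fraction images:
# mean_image = np.mean(fraction_images, axis=0)
# scores = [corr2(img, mean_image) for img in fraction_images]
```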

  4. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high speed dimensional measuring system capable of scanning a thin film network, and determining if there are conductor widths, resistor widths, or spaces not typical of the design for this product is described. The eye of the system is a conventional TV camera, although such devices as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented to the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage, and communicates with the digitizer to select picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal

  5. Non-technical skills for obstetricians conducting forceps and vacuum deliveries: qualitative analysis by interviews and video recordings.

    Science.gov (United States)

    Bahl, Rachna; Murphy, Deirdre J; Strachan, Bryony

    2010-06-01

    Non-technical skills are cognitive and social skills required in an operational task. These skills have been identified and taught in the surgical domain but are of particular relevance to obstetrics where the patient is awake, the partner is present and the clinical circumstances are acute and often stressful. The aim of this study was to define the non-technical skills of an operative vaginal delivery (forceps or vacuum) to facilitate transfer of skills from expert obstetricians to trainee obstetricians. Qualitative study using interviews and video recordings. The study was conducted at two university teaching hospitals (St. Michael's Hospital, Bristol and Ninewells Hospital, Dundee). Participants included 10 obstetricians and eight midwives identified as experts in conducting or supporting operative vaginal deliveries. Semi-structured interviews were carried out using routine clinical scenarios. The experts were also video recorded conducting forceps and vacuum deliveries in a simulation setting. The interviews and video recordings were transcribed verbatim and analysed using thematic coding. The anonymised data were independently coded by the three researchers and then compared for consistency of interpretation. The experts reviewed the coded data for respondent validation and clarification. The themes that emerged were used to identify the non-technical skills required for conducting an operative vaginal delivery. The final skills list was classified into seven main categories. Four categories (situational awareness, decision making, task management, and team work and communication) were similar to the categories identified in surgery. Three further categories unique to obstetrics were also identified (professional relationship with the woman, maintaining professional behaviour and cross-monitoring of performance). This explicitly defined skills taxonomy could aid trainees' understanding of the non-technical skills to be considered when conducting an operative

  6. Applying GA for Optimizing the User Query in Image and Video Retrieval

    OpenAIRE

    Ehsan Lotfi

    2014-01-01

    In an information retrieval system, the query can be made by user sketch. The new method presented here optimizes the user sketch and applies the optimized query to retrieve the information. This optimization may be used in Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR), which is based on trajectory extraction. To optimize the retrieval process, one stage of retrieval is performed by the user sketch. The retrieval criterion is based on the proposed distance met...

  7. Exploring Multi-Modal and Structured Representation Learning for Visual Image and Video Understanding

    OpenAIRE

    Xu, Dan

    2018-01-01

    With the explosive growth of visual data, it is particularly important to develop intelligent visual understanding techniques for dealing with large amounts of data. Many efforts have been made in recent years to build highly effective and large-scale visual processing algorithms and systems. One of the core aspects in this line of research is how to learn robust representations to better describe the data. In this thesis we study the problem of visual image and video understanding and specifi...

  8. Correspondence between audio and visual deep models for musical instrument detection in video recordings

    OpenAIRE

    Slizovskaia, Olga; Gómez, Emilia; Haro, Gloria

    2017-01-01

    This work aims at investigating cross-modal connections between audio and video sources in the task of musical instrument recognition. We also address in this work the understanding of the representations learned by convolutional neural networks (CNNs) and we study feature correspondence between audio and visual components of a multimodal CNN architecture. For each instrument category, we select the most activated neurons and investigate existing cross-correlations between neurons from the ...

  9. Validation of a new tool for automatic assessment of tremor frequency from video recordings

    Czech Academy of Sciences Publication Activity Database

    Uhríková, Z.; Šprdlík, Otakar; Hoskovcová, M.; Komárek, A.; Ulmanová, O.; Hlaváč, V.; Nugent, Ch. D.; Růžička, E.

    2011-01-01

    Vol. 198, No. 1 (2011), pp. 110-113. ISSN 0165-0270. R&D Projects: GA MŠk(CZ) 1M0567. Institutional research plan: CEZ:AV0Z10750506. Keywords: Tremor frequency * essential tremor * video analysis * Fourier transformation * accelerometry. Subject RIV: BC - Control Systems Theory. Impact factor: 1.980, year: 2011. http://library.utia.cas.cz/separaty/2011/TR/sprdlik-0359324.pdf

  10. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  11. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, and the camera was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  12. A low-cost, high-resolution, video-rate imaging optical radar

    Energy Technology Data Exchange (ETDEWEB)

    Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F. [Sandia National Labs., Albuquerque, NM (United States); Grantham, J.W.; Monson, T. [Air Force Research Lab., Eglin AFB, FL (United States)

    1998-04-01

    Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image-intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low-cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

  13. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    Science.gov (United States)

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  14. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  15. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from year 1999 (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  16. Video Transect Images (1999) from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP) (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  17. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP):Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  18. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  19. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2000 (NODC Accession 0000728)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2000 at 23 sites, some of which had multiple depths. Estimates of substrate...

  20. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  1. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    OpenAIRE

    Tominaga Shoji; Plataniotis Konstantinos N.; Trémeau Alain

    2008-01-01

    Abstract The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the mos...

  2. Shifting Weights: Adapting Object Detectors from Image to Video (Author’s Manuscript)

    Science.gov (United States)

    2012-12-08

    [Figure 1 caption: Images of the "Skateboard", "Sewing machine", and "Sandwich" classes taken from (top row) ImageNet [7...] [Table fragment: per-class detection results for "Skateboard" comparing InitialBL, VideoPosBL, our method (nt), our method (full), and Gopalan et al. [18] (PLS and SVM).] ...belongs to no event class. We select 6 object classes to learn object detectors for because they are commonly present in selected events: "Skateboard

  3. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill which depends on a high degree of training and hands-on experience. However, there is a limited number of skillful sonographers located in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is using an open source platform for video streaming which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system are evaluated for several ultrasound video resolutions and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  4. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.
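
    To make the ME step itself concrete, the sketch below shows a generic full-search block-matching motion estimator with a sum-of-absolute-differences (SAD) criterion. It does not implement the paper's inter-frame data-reuse buffering, which is a hardware scheduling scheme rather than a new matching algorithm; block size and search range are arbitrary assumptions.

```python
# Generic full-search block-matching motion estimation (SAD criterion).
import numpy as np

def full_search_me(cur: np.ndarray, ref: np.ndarray, block: int = 16, rng: int = 8):
    """Returns an array of (dy, dx) motion vectors per block (grayscale frames)."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-rng, rng + 1):
                for dx in range(-rng, rng + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(int)
                    sad = np.abs(target - cand).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs
```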

  5. Video-rate resonant scanning multiphoton microscopy: An emerging technique for intravital imaging of the tumor microenvironment.

    Science.gov (United States)

    Kirkpatrick, Nathaniel D; Chung, Euiheon; Cook, Daniel C; Han, Xiaoxing; Gruionu, Gabriel; Liao, Shan; Munn, Lance L; Padera, Timothy P; Fukumura, Dai; Jain, Rakesh K

    2012-01-01

    The abnormal tumor microenvironment fuels tumor progression, metastasis, immune suppression, and treatment resistance. Over the last several decades, developments in and applications of intravital microscopy have provided unprecedented insights into the dynamics of the tumor microenvironment. In particular, intravital multiphoton microscopy has revealed the abnormal structure and function of tumor-associated blood and lymphatic vessels, the role of aberrant tumor matrix in drug delivery, invasion and metastasis of tumor cells, the dynamics of immune cell trafficking to and within tumors, and gene expression in tumors. However, traditional multiphoton microscopy suffers from inherently slow imaging rates - only a few frames per second - and is thus unable to capture more rapid events such as blood flow, lymphatic flow, and cell movement within vessels. Here, we report the development and implementation of a video-rate multiphoton microscope (VR-MPLSM) based on resonant galvanometer mirror scanning that is capable of recording at 30 frames per second and acquiring intravital multispectral images. We show that the design of the system can be readily implemented and is adaptable to various experimental models. As examples, we demonstrate the utility of the system to directly measure flow within tumors, capture metastatic cancer cells moving within the brain vasculature and cells in lymphatic vessels, and image acute responses to changes in a vascular network. VR-MPLSM thus has the potential to further advance intravital imaging and provide new insight into the biology of the tumor microenvironment.

  6. System and carrier for optical images and holographic information recording

    International Nuclear Information System (INIS)

    Andries, A.; Bivol, V.; Iovu, M

    2002-01-01

    The invention relates to semiconducting silverless photography, in particular to techniques for optical information recording, and may be used in microphotography for the manufacture of microfiches, microfilms and storage disks, in multiplication and copying technology, in holography, in micro- and optoelectronics, in cinematography, etc. The system for recording optical images and holographic information includes an optical exposure system; an information carrier containing a dielectric substrate with a first electrode, a photosensitive element and a second electrode arranged in consecutive order; a constant and pulsed voltage source; a means for clamping and moving the information carrier; a control unit for connecting the voltage source to the electroconducting substrate; a personal computer connected to the recording-mode control unit, to the exposure system and to the information carrier; and an electro-optical transparency connected to the computer by means of a matching unit. The carrier for recording optical images and holographic information contains a dielectric substrate and a photosensitive element formed of a layer of vitreous chalcogenide semiconductor and a layer of crystalline or amorphous semiconductor forming a heterojunction; the photosensitive element is arranged between two electrodes, one of which is made transparent, in such a way that the layer of vitreous chalcogenide semiconductor is in contact with the upper transparent electrode, which is subjected to exposure.

  7. Superimpose of images by appending two simple video amplifier circuits to color television

    International Nuclear Information System (INIS)

    Kojima, Kazuhiko; Hiraki, Tatsunosuke; Koshida, Kichiro; Maekawa, Ryuichi; Hisada, Kinichi.

    1979-01-01

    Images are very useful for obtaining diagnostic information in medical fields. By superimposing two or three images obtained from the same patient, information that cannot be found in any single image, for example the degree of overlap and anatomical landmarks, can often be found. In this paper the characteristics of our trial color television system for superimposing x-ray images and/or radionuclide images are described. This color television system, which superimposes two images each in a different color, consists of two monochromatic vidicon cameras and a 20-inch conventional color television to which only two simple video amplifier circuits are added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. This system is a very simple and economical color display, and enhances the visibility of overlap and displacement between images. As one typical clinical application, pancreas images were superimposed in color by this method. As a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to find the exact position of tumors. Furthermore, this system was very useful for the color display of multinuclide scintigraphy. (author)

  8. Superimpose of images by appending two simple video amplifier circuits to color television

    Energy Technology Data Exchange (ETDEWEB)

    Kojima, K; Hiraki, T; Koshida, K; Maekawa, R [Kanazawa Univ. (Japan). School of Paramedicine]; Hisada, K

    1979-09-01

    Images are very useful for obtaining diagnostic information in medical fields. By superimposing two or three images obtained from the same patient, information that cannot be found in any single image, for example the degree of overlap and anatomical landmarks, can often be found. In this paper the characteristics of our trial color television system for superimposing x-ray images and/or radionuclide images are described. This color television system, which superimposes two images each in a different color, consists of two monochromatic vidicon cameras and a 20-inch conventional color television to which only two simple video amplifier circuits are added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. This system is a very simple and economical color display, and enhances the visibility of overlap and displacement between images. As one typical clinical application, pancreas images were superimposed in color by this method. As a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to find the exact position of tumors. Furthermore, this system was very useful for the color display of multinuclide scintigraphy.

  9. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for individual procedure vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency, intraclass correlation coefficients were compared by colocated versus video review of IPS, and errors. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) - 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. Intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations video versus colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p competency. Prognostic study, level II.

  10. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol.

    Science.gov (United States)

    Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter

    2017-10-25

    For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.

  11. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol

    Directory of Open Access Journals (Sweden)

    Mirae Harford

    2017-10-01

    Full Text Available Abstract Background For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. Methods We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. Discussion To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. Systematic review registration PROSPERO CRD42016029167

  12. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

    Full Text Available The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.
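
    The record above relies on a hypothesise-and-verify (RANSAC-style) loop. The sketch below shows that generic loop for a simple 2-D line model only; the paper's LR-RANSAC matches image lines to CAD wireframe lines and estimates camera parameters instead, so this is an illustration of the shared sample / fit / count-inliers pattern, not of LR-RANSAC itself.

```python
# Generic hypothesise-and-verify (RANSAC) skeleton for a 2-D line model.
import numpy as np

def ransac_line(points: np.ndarray, n_iters: int = 200, tol: float = 1.0, seed=None):
    """points: (N, 2). Returns ((a, b, c), inlier_count) for the line ax + by + c = 0."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), size=2, replace=False)]
        a, b = q[1] - p[1], p[0] - q[0]            # line through the two samples
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        a, b = a / norm, b / norm
        c = -(a * p[0] + b * p[1])
        dist = np.abs(points @ np.array([a, b]) + c)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:                 # verify: keep the best hypothesis
            best_model, best_inliers = (a, b, c), inliers
    return best_model, best_inliers
```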

  13. Reliability of Alberta Infant Motor Scale Using Recorded Video Observations Among the Preterm Infants in India: A Reliability Study

    Directory of Open Access Journals (Sweden)

    Veena Kirthika S

    2017-10-01

    Full Text Available Background: Assessment of motor function is a vital characteristic of infant development. The Alberta Infant Motor Scale (AIMS) is considered to be one of the tools available for screening developmental delays, but this scale was formulated using western samples. Every country has its own ethnic and cultural background, and various differences are observed in culture and ethnicity. Therefore, there is a need to establish the reliability of AIMS in the south Indian population. Purpose: To find the intra-rater and inter-rater reliability of the Alberta Infant Motor Scale (AIMS) on pre-term infants using recorded video observations in an Indian population. Method: 30 preterm infants in three age groups, 0-3 months (10 infants), 4-7 months (10 infants) and 8-18 months (10 infants), were recruited for this reliability study. The AIMS was administered to the preterm infants and the performance was videotaped. The performance was then rescored by the same therapist, immediately from the video and in another two consecutive months, to estimate intra-rater reliability using ICC (3,1), a two-way mixed effects model. For inter-rater reliability, AIMS was scored by three different raters using ICC (2,k), a two-way random effects model, and by two other therapists to examine the inter- and intra-rater reliability. Results: The two-way mixed effects model for intra-rater reliability of AIMS gave ICC (3,1) = 0.99, and the two-way random effects model for inter-rater reliability gave ICC (2,k) = 0.96. Conclusion: AIMS has excellent intra- and inter-rater reliability using recorded video observations among preterm infants in India.

  14. Computer simulation of radiographic images sharpness in several system of image record

    International Nuclear Information System (INIS)

    Silva, Marcia Aparecida; Schiable, Homero; Frere, Annie France; Marques, Paulo M.A.; Oliveira, Henrique J.Q. de; Alves, Fatima F.R.; Medeiros, Regina B.

    1996-01-01

    A method to predict the influence of the recording system on radiographic image sharpness by computer simulation is studied. The method is intended to show in advance the image that would be obtained for each type of film or screen-film combination used during the exposure

  15. Pollen Bearing Honey Bee Detection in Hive Entrance Video Recorded by Remote Embedded System for Pollination Monitoring

    Science.gov (United States)

    Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.

    2016-06-01

    Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for pollen bearing honey bee detection in surveillance video obtained at the entrance of a hive. The proposed system can be used as a part of a more complex system for tracking and counting of honey bees, with remote pollination monitoring as a final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen bearing honey bees and honey bees that do not carry a pollen load, is performed using a nearest mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. This favors the proposed method, particularly bearing in mind that real-time video transmission to a remote high-performance computing workstation is still an issue, while transfer of the obtained parameters of the pollination process is much easier.
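
    A minimal sketch of the two-feature nearest-mean classifier described above (color variance plus eccentricity) is given below. The segmentation that produces the bee blobs and the feature extraction are assumed to exist elsewhere; only the classification step is illustrated.

```python
# Minimal nearest-mean classifier over (color variance, eccentricity) features.
import numpy as np

class NearestMeanClassifier:
    def fit(self, features: np.ndarray, labels: np.ndarray):
        """features: (N, 2); labels: 0 = no pollen load, 1 = pollen bearing."""
        self.means_ = np.stack([features[labels == c].mean(axis=0) for c in (0, 1)])
        return self

    def predict(self, features: np.ndarray) -> np.ndarray:
        # Assign each sample to the class whose mean feature vector is closest
        dists = np.linalg.norm(features[:, None, :] - self.means_[None], axis=2)
        return dists.argmin(axis=1)
```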

  16. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
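
    The topic-based publish/subscribe routing mentioned in the messaging layer can be illustrated with a toy message bus like the one below. The real architecture is a compiled, multicore-oriented framework, so this is only a sketch of how topic filtering delivers messages to interested subscribers; all names are hypothetical.

```python
# Toy topic-based publish/subscribe bus.
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self) -> None:
        # topic name -> list of handler callables
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        for handler in self._subscribers[topic]:   # only matching topics receive it
            handler(message)

# Usage sketch: a processing module subscribes to frames published by acquisition.
# bus = MessageBus()
# bus.subscribe("frames/raw", lambda frame: print("got frame"))
# bus.publish("frames/raw", acquired_frame)
```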

  17. Visualization index for image-enabled medical records

    Science.gov (United States)

    Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo

    2011-03-01

    With the widespread use of healthcare information technology in hospitals, patients' medical records are becoming more and more complex. To transform text- or image-based medical information into a form that is easily understandable and acceptable for humans, we designed and developed an innovative indexing method which can be used to assign an anatomical 3D structure object to every patient visually, to store indexes of the patient's basic information, historical examined image information and RIS report information. When a doctor wants to review a patient's historical records, he or she can first load the anatomical structure object and then view the 3D index of this object using a digital human model toolkit. This prototype system helps doctors to easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time that may be better used to understand information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.

  18. A comparison between flexible electrogoniometers, inclinometers and three-dimensional video analysis system for recording neck movement.

    Science.gov (United States)

    Carnaz, Letícia; Moriguchi, Cristiane S; de Oliveira, Ana Beatriz; Santiago, Paulo R P; Caurin, Glauco A P; Hansson, Gert-Åke; Coury, Helenice J C Gil

    2013-11-01

    This study compared neck range of movement recording using three different methods: goniometers (EGM), inclinometers (INC) and a three-dimensional video analysis system (IMG), in simultaneous and synchronized data collection. Twelve females performed neck flexion-extension, lateral flexion, rotation and circumduction. The differences between EGM, INC, and IMG were calculated sample by sample. For the flexion-extension movement, IMG underestimated the amplitude by 13%; moreover, EGM showed a crosstalk of about 20% for the lateral flexion and rotation axes. In lateral flexion movement, all systems showed similar amplitude and the inter-system differences were moderate (4-7%). For the rotation movement, EGM showed a high crosstalk (13%) for the flexion-extension axis. During the circumduction movement, IMG underestimated the amplitude of flexion-extension movements by about 11%, and the inter-system differences were high (about 17%) except for INC-IMG regarding lateral flexion (7%) and EGM-INC regarding flexion-extension (10%). For application in the workplace, INC presents good results compared to IMG and EGM, though INC cannot record rotation. EGM should be improved in order to reduce its crosstalk errors and allow recording of the full neck range of movement. Due to non-optimal positioning of the cameras for recording flexion-extension, IMG underestimated the amplitude of these movements. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.

  19. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy more and more surgeons are changing over to recording and storing videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would do. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.
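
    The paper above uses adaptive feature signatures with a signature-compatible metric, which are more involved than can be shown here. As a hedged stand-in, the sketch below retrieves the best-matching video segment for a captured image using a plain joint colour histogram and L1 distance, just to make the query-by-captured-image workflow concrete; the segment/keyframe data structure is an assumption.

```python
# Simplified content-based retrieval: colour-histogram descriptor + L1 distance.
import numpy as np

def colour_histogram(rgb: np.ndarray, bins: int = 8) -> np.ndarray:
    """rgb: (H, W, 3) uint8. Returns a normalised joint RGB histogram."""
    hist, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def best_matching_segment(query_image, segment_keyframes):
    """segment_keyframes: list of (segment_id, keyframe) pairs (hypothetical)."""
    q = colour_histogram(query_image)
    scored = [(np.abs(q - colour_histogram(frame)).sum(), seg_id)
              for seg_id, frame in segment_keyframes]
    return min(scored, key=lambda t: t[0])[1]
```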

  20. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we propose a new method, called the Mask automatic detecting method, to improve structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the "mask" in this paper, since the features on them should be masked out to avoid poor matches. After removing the masked feature points with our new method, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparison experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the masked features also increased the accuracy of the point clouds by nearly 30%–40% and corrected the problem of the typical methods repeatedly reconstructing several buildings when there was only one target building.

  1. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between on one hand low-resolution and low-quality images......, we use a learning-based super-resolution algorithm applied to the result of the reconstruction-based part to improve the quality by another factor of two. This results in an improvement factor of four for the entire system. The proposed system has been tested on 122 low-resolution sequences from two...... different databases. The experimental results show that the proposed system can indeed produce a high-resolution and good quality frontal face image from low-resolution video sequences....

  2. Integration of prior knowledge into dense image matching for video surveillance

    Science.gov (United States)

    Menze, M.; Heipke, C.

    2014-08-01

    Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups they do not easily generalize to more challenging camera configurations. In the context of video surveillance the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.

  3. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    Science.gov (United States)

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complimentary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a BAYER mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complimentary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.

  4. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixels) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
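
    A hedged sketch of per-pixel phase recovery from N equally spaced samples of the beat signal (N > 4, as discussed in the record) is given below, using the first DFT bin. The conversion to range assumes a known modulation frequency and the standard time-of-flight relation; it is not necessarily the calibration used by this particular camera.

```python
# Per-pixel phase and range from N samples across one beat cycle (first DFT bin).
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_and_range(samples: np.ndarray, f_mod: float):
    """samples: (N, H, W) intensity frames sampled evenly over one beat cycle."""
    n = samples.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    bin1 = np.sum(samples * np.exp(-2j * np.pi * k / n), axis=0)  # fundamental bin
    phase = np.mod(np.angle(bin1), 2 * np.pi)
    distance = phase / (2 * np.pi) * C / (2 * f_mod)  # standard ToF relation
    return phase, distance
```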

  5. Reliable assessment of general surgeons' non-technical skills based on video-recordings of patient simulated scenarios.

    Science.gov (United States)

    Spanager, Lene; Beier-Holgersen, Randi; Dieckmann, Peter; Konge, Lars; Rosenberg, Jacob; Oestergaard, Doris

    2013-11-01

    Nontechnical skills are essential for safe and efficient surgery. The aim of this study was to evaluate the reliability of an assessment tool for surgeons' nontechnical skills, Non-Technical Skills for Surgeons dk (NOTSSdk), and the effect of rater training. A 1-day course was conducted for 15 general surgeons in which they rated surgeons' nontechnical skills in 9 video recordings of scenarios simulating real intraoperative situations. Data were gathered from 2 sessions separated by a 4-hour training session. Interrater reliability was high for both pretraining ratings (Cronbach's α = .97) and posttraining ratings (Cronbach's α = .98). There was no statistically significant development in assessment skills. The D study showed that 2 untrained raters or 1 trained rater was needed to obtain generalizability coefficients >.80. The high pretraining interrater reliability indicates that videos were easy to rate and Non-Technical Skills for Surgeons dk easy to use. This implies that Non-Technical Skills for Surgeons dk (NOTSSdk) could be an important tool in surgical training, potentially improving safety and quality for surgical patients. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of deep artificial neural network. Second, for the purpose of best reconstructing original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of patch clustering algorithm. Finally, it is shown in the results of simulation experiments that the proposed methods can simultaneously gain higher compression ratio and peak signal-to-noise ratio than those of the state-of-the-art methods in the situation of low bitrate transmission.
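
    The following Python sketch illustrates the two preprocessing ideas in a simplified form, under the assumption that a purely linear autoencoder can be emulated by a truncated SVD/PCA projection; the patch size, cluster count and code dimension are illustrative, not the paper's settings.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.image import extract_patches_2d

        def linear_autoencode(patches, dim):
            """Fit a linear encoder/decoder (PCA-like) minimising reconstruction error."""
            mean = patches.mean(axis=0)
            centered = patches - mean
            # A truncated SVD gives the optimal rank-`dim` linear reconstruction.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            encoder = vt[:dim]                      # (dim, patch_size)
            codes = centered @ encoder.T            # 1-D code per patch
            recon = codes @ encoder + mean
            return codes, recon

        # Hypothetical pipeline: cluster 8x8 patches, then encode each cluster separately.
        image = np.random.rand(128, 128)
        patches = extract_patches_2d(image, (8, 8)).reshape(-1, 64)
        labels = KMeans(n_clusters=16, n_init=10).fit_predict(patches)
        for c in range(16):
            codes, recon = linear_autoencode(patches[labels == c], dim=8)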

  7. Real-time strategy video game experience and structural connectivity - A diffusion tensor imaging study.

    Science.gov (United States)

    Kowalczyk, Natalia; Shi, Feng; Magnuski, Mikolaj; Skorko, Maciek; Dobrowolski, Pawel; Kossowski, Bartosz; Marchewka, Artur; Bielecki, Maksymilian; Kossut, Malgorzata; Brzezicka, Aneta

    2018-06-20

    Experienced video game players exhibit superior performance in visuospatial cognition when compared to non-players. However, very little is known about the relation between video game experience and structural brain plasticity. To address this issue, a direct comparison of the white matter brain structure in RTS (real time strategy) video game players (VGPs) and non-players (NVGPs) was performed. We hypothesized that RTS experience can enhance connectivity within and between occipital and parietal regions, as these regions are likely to be involved in the spatial and visual abilities that are trained while playing RTS games. The possible influence of long-term RTS game play experience on brain structural connections was investigated using diffusion tensor imaging (DTI) and a region of interest (ROI) approach in order to describe the experience-related plasticity of white matter. Our results revealed significantly more total white matter connections between occipital and parietal areas and within occipital areas in RTS players compared to NVGPs. Additionally, the RTS group had an altered topological organization of their structural network, expressed in local efficiency within the occipito-parietal subnetwork. Furthermore, the positive association between network metrics and time spent playing RTS games suggests a close relationship between extensive, long-term RTS game play and neuroplastic changes. These results indicate that long-term and extensive RTS game experience induces alterations along axons that link structures of the occipito-parietal loop involved in spatial and visual processing. © 2018 Wiley Periodicals, Inc.

  8. The architecture of a video image processor for the space station

    Science.gov (United States)

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of algorithms that are necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high level language programmability, modularity, extensibility and can meet the required performance goals.

  9. 77 FR 40619 - Announcement of Requirements and Registration for What's In Your Health Record Video Challenge

    Science.gov (United States)

    2012-07-10

    ... benefit of being able to view what was in your record? iv. What did you, or your provider learn from... Information Technology, HHS. Award Approving Official: Lygeia Ricciardi, Director, Office of Consumer eHealth. ACTION: Notice. SUMMARY: The Office of the National Coordinator for Health Information Technology (ONC...

  10. Automatic lameness detection based on consecutive 3D-video recordings

    NARCIS (Netherlands)

    Hertem, van T.; Viazzi, S.; Steensels, M.; Maltz, E.; Antler, A.; Alchanatis, V.; Schlageter-Tello, A.; Lokhorst, C.; Romanini, C.E.B.; Bahr, C.; Berckmans, D.; Halachmi, I.

    2014-01-01

    Manual locomotion scoring for lameness detection is a time-consuming and subjective procedure. Therefore, the objective of this study is to optimise the classification output of a computer vision based algorithm for automated lameness scoring. Cow gait recordings were made during four consecutive

  11. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
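
    A minimal sketch of the frame-to-frame tracking idea, assuming OpenCV and a known 3x3 intrinsic matrix K from a standard calibration; the feature type, matcher and thresholds are assumptions, not the authors' exact pipeline.

        import cv2
        import numpy as np

        def frame_to_frame_motion(img1, img2, K):
            """Estimate relative camera rotation/translation between two frames and
            triangulate matched surface points (structure recovered up to scale)."""
            orb = cv2.ORB_create(2000)
            kp1, des1 = orb.detectAndCompute(img1, None)
            kp2, des2 = orb.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
            pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
            # Two-camera epipolar geometry with RANSAC outlier rejection.
            E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
            # Triangulate points to recover the surrounding structure.
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = K @ np.hstack([R, t])
            pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
            return R, t, (pts4d[:3] / pts4d[3]).T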

  12. Dynamics of Stability of Orientation Maps Recorded with Optical Imaging.

    Science.gov (United States)

    Shumikhina, S I; Bondar, I V; Svinov, M M

    2018-03-15

    Orientation selectivity is an important feature of visual cortical neurons. Optical imaging of the visual cortex allows for the generation of maps of orientation selectivity that reflect the activity of large populations of neurons. To estimate the statistical significance of effects of experimental manipulations, evaluation of the stability of cortical maps over time is required. Here, we performed optical imaging recordings of the visual cortex of anesthetized adult cats. Monocular stimulation with moving clockwise square-wave gratings that continuously changed orientation and direction was used as the mapping stimulus. Recordings were repeated at various time intervals, from 15 min to 16 h. Quantification of map stability was performed on a pixel-by-pixel basis using several techniques. Map reproducibility showed clear dynamics over time. The highest degree of stability was seen in maps recorded 15-45 min apart. Averaging across all time intervals and all stimulus orientations revealed a mean shift of 2.2 ± 0.1°. There was a significant tendency for larger shifts to occur at longer time intervals. Shifts between 2.8° (mean ± 2SD) and 5° were observed more frequently at oblique orientations, while shifts greater than 5° appeared more frequently at cardinal orientations. Shifts greater than 5° occurred rarely overall (5.4% of cases) and never exceeded 11°. Shifts of 10-10.6° (0.7%) were seen occasionally at time intervals of more than 4 h. Our findings should be considered when evaluating the potential effect of experimental manipulations on orientation selectivity mapping studies. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. An automated form of video image analysis applied to classification of movement disorders.

    Science.gov (United States)

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles and derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, to yield 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrated the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis.
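
    As a small illustration of how a posture is reduced to basic parameters, the hypothetical helper below computes a joint angle from three skeletal landmark coordinates; the landmark names and pixel values are invented for the example.

        import numpy as np

        def joint_angle(proximal, joint, distal):
            """Angle (degrees) at `joint` formed by the segments to `proximal` and `distal`."""
            u = np.asarray(proximal, float) - np.asarray(joint, float)
            v = np.asarray(distal, float) - np.asarray(joint, float)
            cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

        # Hypothetical hip-knee-ankle pixel coordinates from one video frame:
        print(joint_angle((310, 240), (320, 330), (318, 420)))  # knee angle, roughly 172 degrees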

  14. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    Science.gov (United States)

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows in vivo optical biopsies to be performed. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by the models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
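
    A hedged sketch of the synthetic-data idea: degrading an estimated HR frame into a plausible LR training input. The Gaussian blur, subsampling factor and noise level are illustrative choices, not the paper's model of fibre-bundle sampling.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def synthesize_lr(hr, factor=2, blur_sigma=1.0, noise_std=0.01):
            """Degrade an estimated HR frame into a realistic-looking LR training input."""
            blurred = gaussian_filter(hr, sigma=blur_sigma)        # optical blur
            lr = blurred[::factor, ::factor]                       # coarser sampling
            lr = lr + np.random.normal(0.0, noise_std, lr.shape)   # detector noise
            return np.clip(lr, 0.0, 1.0)

        # Pairs (lr, hr) would then be used to train a super-resolution network.
        hr_frame = np.random.rand(256, 256)
        lr_frame = synthesize_lr(hr_frame)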

  15. Video clip transfer of radiological images using a mobile telephone in emergency neurosurgical consultations (3G Multi-Media Messaging Service).

    Science.gov (United States)

    Waran, Vicknes; Bahuri, Nor Faizal Ahmad; Narayanan, Vairavan; Ganesan, Dharmendra; Kadir, Khairul Azmi Abdul

    2012-04-01

    The purpose of this study was to validate and assess the accuracy and usefulness of sending short video clips in 3gp file format of an entire scan series of patients, using mobile telephones running on 3G-MMS technology, to enable consultation between junior doctors in a neurosurgical unit and the consultants on-call after office hours. A total of 56 consecutive patients with acute neurosurgical problems requiring urgent after-hours consultation during a 6-month period, prospectively had their images recorded and transmitted using the above method. The response to the diagnosis and the management plan by two neurosurgeons (who were not on site) based on the images viewed on a mobile telephone were reviewed by an independent observer and scored. In addition to this, a radiologist reviewed the original images directly on the hospital's Patients Archiving and Communication System (PACS) and this was compared with the neurosurgeons' response. Both neurosurgeons involved in this study were in complete agreement with their diagnosis. The radiologist disagreed with the diagnosis in only one patient, giving a kappa coefficient of 0.88, indicating an almost perfect agreement. The use of mobile telephones to transmit MPEG video clips of radiological images is very advantageous for carrying out emergency consultations in neurosurgery. The images accurately reflect the pathology in question, thereby reducing the incidence of medical errors from incorrect diagnosis, which otherwise may just depend on a verbal description.

  16. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
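
    For reference, a minimal log-domain implementation of the classical 1-D Viterbi algorithm that the paper extends to multiple dimensions; the variable names and shapes are assumptions for the sketch.

        import numpy as np

        def viterbi(log_A, log_B, log_pi, obs):
            """Most likely state path for a 1-D HMM (log-domain dynamic programming).

            log_A:  (S, S) log transition matrix, log_B: (S, V) log emission matrix,
            log_pi: (S,) log initial distribution, obs: sequence of observation indices.
            """
            S, T = log_A.shape[0], len(obs)
            delta = np.full((T, S), -np.inf)
            back = np.zeros((T, S), dtype=int)
            delta[0] = log_pi + log_B[:, obs[0]]
            for t in range(1, T):
                scores = delta[t - 1][:, None] + log_A          # (from state, to state)
                back[t] = np.argmax(scores, axis=0)
                delta[t] = scores[back[t], np.arange(S)] + log_B[:, obs[t]]
            path = [int(np.argmax(delta[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]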

  17. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey.

    Science.gov (United States)

    Dennis, T; Start, R D; Cross, S S

    2005-03-01

    To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings.

  18. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
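
    As one example of the DSP stages listed above, the sketch below implements white balance under the gray-world assumption; it is a generic software illustration, not the paper's hardware algorithm.

        import numpy as np

        def gray_world_white_balance(rgb):
            """Scale the R, G, B channels so their means match (gray-world assumption)."""
            rgb = rgb.astype(np.float32)
            means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel mean
            gains = means.mean() / means              # per-channel gain
            balanced = rgb * gains                    # broadcast over the channel axis
            return np.clip(balanced, 0, 255).astype(np.uint8)

        # Usage (hypothetical): balanced = gray_world_white_balance(camera_frame)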

  19. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology is progressing quickly and spreading throughout various technological fields, and its development should respond to the need for improved quality in e-learning systems. The authors propose a new video-image compression processing system that exploits the characteristic features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, the PC screen images are captured electronically by screen-capture software at relatively long intervals during an actual class. A lecturer and a lecture stick are then extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. In this way the authors have created high-quality and small-capacity (HQ/SC) video-on-demand educational content offering the advantages of sharp images, small electronic file capacity, and realistic lecturer motion.

  20. VISDTA: A video imaging system for detection, tracking, and assessment: Prototype development and concept demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Pritchard, D.A.

    1987-05-01

    It has been demonstrated that thermal imagers are an effective surveillance and assessment tool for security applications because: (1) they work day or night due to their sensitivity to thermal signatures; (2) penetrability through fog, rain, dust, etc., is better than that of human eyes; (3) short or long range operation is possible with various optics; and (4) they are strictly passive devices providing visible imagery which is readily interpreted by the operator with little training. Unfortunately, most thermal imagers also require the setup of a tripod, connection of batteries, cables, display, etc. When this is accomplished, the operator must manually move the camera back and forth searching for signs of aggressor activity. VISDTA is designed to provide automatic panning, and in a sense, "watch" the imagery in place of the operator. The idea behind the development of VISDTA is to provide a small, portable, rugged system to automatically scan areas and detect targets by computer processing of images. It would use a thermal imager and possibly an intensified day/night TV camera, a pan/tilt mount, and a computer for system control. If mounted on a dedicated vehicle or on a tower, VISDTA will perform video motion detection functions on incoming video imagery, and automatically scan predefined patterns in search of abnormal conditions which may indicate attempted intrusions into the field-of-regard. In that respect, VISDTA is capable of improving the ability of security forces to maintain security of a given area of interest by augmenting present techniques and reducing operator fatigue.
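
    A minimal sketch of the video-motion-detection idea, assuming simple frame differencing against a reference image; the thresholds are illustrative.

        import numpy as np

        def motion_detected(reference, frame, diff_threshold=25, pixel_fraction=0.005):
            """Flag a frame if enough pixels differ from the reference by more than a threshold."""
            diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
            changed = np.count_nonzero(diff > diff_threshold)
            return changed > pixel_fraction * diff.size

        # In practice the reference would be updated between calls (e.g. as an
        # exponential average of recent frames) to tolerate slow thermal drift.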

  1. Method for operating video game with back-feeding a video image of a player, and a video game arranged for practicing the method.

    NARCIS (Netherlands)

    2006-01-01

    In a video gaming environment, a player is enabled to interact with the environment. Further, a score and/or performance of the player in a particular session is machine detected and fed back into the gaming environment and a representation of said score and/or performance is displayed in visual

  2. Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time

    International Nuclear Information System (INIS)

    Jarvis, Lesley A.; Zhang, Rongxiao; Gladstone, David J.; Jiang, Shudong; Hitchcock, Whitney; Friedman, Oscar D.; Glaser, Adam K.; Jermyn, Michael; Pogue, Brian W.

    2014-01-01

    Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator with the intensifier set × 100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the image, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected to result in improved safety and quality of radiation therapy

  3. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    Science.gov (United States)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

    Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large aperture optics, which increase the achievable size and weight of such systems to beyond what can be supported by many applications. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The side bands are subsequently stripped from the optical carrier and recombined to provide a real time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.

  4. Elements and process for recording direct image neutron radiographs

    International Nuclear Information System (INIS)

    Poignant, R.V. Jr.; Przybylowicz, E.P.

    1975-01-01

    An element is provided for recording a direct image neutron radiograph, thus eliminating the need for a transfer step (i.e., the use of a transfer screen). The element is capable of holding an electrostatic charge and comprises a first layer for absorbing neutrons and generating a current by dissipation of said electrostatic charge in proportion to the number of neutrons absorbed, and a second layer for conducting the current generated by the absorbed neutrons, said neutron absorbing layer comprising an insulative layer comprising neutron absorbing agents in a concentration of at least 10^17 atoms per cm^3. An element for enhancing the effect of the neutron beam by utilizing the secondary emanations of neutron absorbing materials is also disclosed along with a process for using the device. (U.S.)

  5. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2004-06-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation) as well as on the cognitive task (semantic segmentation) at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties and the definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to the sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which allows to
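
    A hedged, minimal version of noise-tolerant colour change detection (not the authors' algorithm): pixels are flagged when their colour difference exceeds a multiple of a robust noise estimate. Compensation for local illumination changes would require an additional normalisation step.

        import numpy as np

        def change_mask(frame, background, k=3.0):
            """Per-pixel change detection: flag pixels whose colour difference exceeds
            k times an estimated sensor noise level."""
            diff = np.linalg.norm(frame.astype(np.float32) - background.astype(np.float32), axis=2)
            # Robust noise estimate from the median absolute deviation (assumes most
            # of the scene is unchanged between the two frames).
            sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
            return diff > k * max(sigma, 1e-6)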

  6. A software oscilloscope for DOS computers with an integrated remote control for a video tape recorder. The assignment of acoustic events to behavioural observations.

    Science.gov (United States)

    Höller, P

    1995-12-01

    With only a little knowledge of programming IBM compatible computers in Basic, it is possible to create a digital software oscilloscope with sampling rates up to 17 kHz (depending on the CPU- and bus-speed). The only additional hardware requirement is a common sound card compatible with the Soundblaster. The system presented in this paper is built to analyse the direction a flying bat is facing during sound emission. For this reason the system works with some additional hardware devices, in order to monitor video sequences at the computer screen, overlaid by an online oscillogram. Using an RS232-interface for a Panasonic video tape recorder both the oscillogram and the video tape recorder can be controlled simultaneously and moreover be analysed frame by frame. Not only acoustical events, but also APs, myograms, EEGs and other physiological data can be digitized and analysed in combination with the behavioural data of an experimental subject.

  7. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The ability of human brain to emulate similar graph/network models is found. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. Brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology naturally present in such structures. Mid-level vision processes like perceptual grouping, separation of figure from ground, are special kinds of network transformations. They convert primary image structure into the set of more abstract ones, which represent objects and visual scene, making them easy for analysis by higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similar to frames and agents. Computational intelligence methods transform images into model-based knowledge representation. Based on such principles, an Image/Video Understanding system can convert images into the knowledge models, and resolve uncertainty and ambiguity. This allows creating intelligent computer vision systems for design and manufacturing.

  8. Relating pressure measurements to phenomena observed in high speed video recordings during tests of explosive charges in a semi-confined blast chamber

    CSIR Research Space (South Africa)

    Mostert, FJ

    2012-09-01

    Full Text Available initiation of the charge. It was observed in the video recordings that the detonation product cloud exhibited pulsating behaviour due to the reflected shocks in the chamber analogous to the behaviour of the gas bubble in underwater explosions. This behaviour...

  9. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
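
    For context, a sketch of the baseline 8x8 block DCT coding step with uniform quantisation, assuming SciPy; the quantisation step size is illustrative and the entropy-coding stage is omitted.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_code_block(block, q_step=16.0):
            """Transform-code one 8x8 block: forward DCT, uniform quantisation, inverse DCT."""
            coeffs = dctn(block - 128.0, norm="ortho")      # centre pixels, 2-D DCT-II
            quantised = np.round(coeffs / q_step)           # coarse coefficients to be entropy coded
            return idctn(quantised * q_step, norm="ortho") + 128.0

        # Apply to every 8x8 tile of a grayscale frame:
        frame = np.random.randint(0, 256, (64, 64)).astype(np.float64)
        recon = np.block([[dct_code_block(frame[i:i+8, j:j+8])
                           for j in range(0, 64, 8)] for i in range(0, 64, 8)])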

  10. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed from a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF has supported that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  11. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  12. Image and video based remote target localization and tracking on smartphones

    Science.gov (United States)

    Wang, Qia; Lobzhanidze, Alex; Jang, Hyun; Zeng, Wenjun; Shang, Yi; Yang, Jingyu

    2012-06-01

    Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, their powerful sensing and computing capability. In this paper, we describe a novel and accurate image and video based remote target localization and tracking system using Android smartphones, leveraging their built-in sensors such as camera, digital compass, GPS, etc. Even though many other distance estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low cost and commodity smartphones is the first of its kind. Furthermore, the smartphones' user-friendly interface is effectively exploited by our system to facilitate low complexity and high accuracy. Our experimental results show that our system works accurately and efficiently.

  13. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    Science.gov (United States)

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954
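
    For reference, the general (unsimplified) collinearity equations in standard photogrammetric notation, from which the simplified model is derived; this is the textbook form, not the authors' simplified version:

        x - x_0 = -f \, \frac{r_{11}(X - X_S) + r_{12}(Y - Y_S) + r_{13}(Z - Z_S)}
                             {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}

        y - y_0 = -f \, \frac{r_{21}(X - X_S) + r_{22}(Y - Y_S) + r_{23}(Z - Z_S)}
                             {r_{31}(X - X_S) + r_{32}(Y - Y_S) + r_{33}(Z - Z_S)}

    where (x_0, y_0) is the principal point, f the focal length, (X_S, Y_S, Z_S) the projection centre, and r_ij the elements of the rotation matrix built from the angles ω, φ, κ.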

  14. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    Directory of Open Access Journals (Sweden)

    Michal Kedzierski

    2016-06-01

    Full Text Available The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.

  15. Slit-lamp management in contact lenses laboratory classes: learning upgrade with monitor visualization of webcam video recordings

    Science.gov (United States)

    Arines, Justo; Gargallo, Ana

    2014-07-01

    Training in the use of the slit lamp has always been difficult for students of the degree in Optics and Optometry. Instruments with associated cameras help greatly in this task: they allow teachers to observe and check whether students evaluate ocular health appropriately, to correct errors of use, and to show them how to proceed with a visual demonstration. However, these devices are more expensive than those without an integrated camera connected to a display unit. With the aim of improving students' skills in the management of the slit lamp, we have adapted USB HD webcams (Microsoft Lifecam HD-5000) to the objectives of the slit lamps available in our contact lens laboratory room. The webcams are connected to a PC running Linux Ubuntu 11.0, so the setup is low cost. Our experience shows that this method has several advantages. It allows us to take good-quality pictures of different ocular conditions, record videos of eye evaluations, and give demonstrations of the instrument. It also increases interaction between students, because they can see what their colleagues are doing, become aware of mistakes, and help and correct each other. It is a useful tool in the practical exam too. We believe the method supports training in optometric practice and increases the students' confidence without a large outlay.

  16. 'Too much, too late': mixed methods multi-channel video recording study of computerized decision support systems and GP prescribing.

    Science.gov (United States)

    Hayward, James; Thomson, Fionagh; Milne, Heather; Buckingham, Susan; Sheikh, Aziz; Fernando, Bernard; Cresswell, Kathrin; Williams, Robin; Pinnock, Hilary

    2013-06-01

    Computerized decision support systems (CDSS) are commonly deployed to support prescribing, although over-riding of alerts by prescribers remains a concern. We aimed to understand how general practitioners (GPs) interact with prescribing CDSS in order to inform deliberation on how better to support prescribing decisions in primary care. Quantitative and qualitative analysis of interactions between GPs, patients, and computer systems using multi-channel video recordings of 112 primary care consultations with eight GPs in three UK practices. 132 prescriptions were issued in the course of 73 of the consultations, of which 81 (61%) attracted at least one alert. Of the total of 117 alerts, only three resulted in the GP checking, but not altering, the prescription. CDSS provided information and safety alerts at the point of generating a prescription. This was 'too much, too late' as the majority of the 'work' of prescribing occurred prior to using the computer. By the time an alert appeared, the GP had formulated the problem(s), potentially spent several minutes considering, explaining, negotiating, and reaching agreement with the patient about the proposed treatment, and had possibly given instructions and printed an information leaflet. CDSS alerts do not coincide with the prescribing workflow throughout the whole GP consultation. Current systems interrupt to correct decisions that have already been taken, rather than assisting formulation of the management plan. CDSS are likely to be more acceptable and effective if the prescribing support is provided much earlier in the process of generating a prescription.

  17. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution – an application in higher education

    NARCIS (Netherlands)

    Jan Kuijten; Ajda Ortac; Hans Maier; Gert de Heer

    2015-01-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels).

  18. Comparison of mating performance of medfly (Diptera: Tephritidae) genetic sexing and wild type strains: field cage and video recording experiments

    International Nuclear Information System (INIS)

    Calcagno, G.E.; Vilardi, J.C.; Manso, F.

    2002-01-01

    To improve the efficiency of the sterile insect technique (SIT) efforts are being devoted to obtain genetic sexing strains (GSS). The present work was carried out in order to compare the mating efficiency of flies from the GSS [(Ty34228 y + /X)sw x ] and from a wild type strain (Mendoza). Females of the GSS (T228) exhibit longer embryonic development, while males develop in a normal time period. In a field-cage experiment, mating competitiveness was compared between the T228 and the Mendoza, Argentina mass reared strain. The number and duration of matings and the location of copula in the tree were recorded. The analysis was repeated using irradiated males of T228. The results showed that mating efficiency of the GSS is good in comparison with that of the Mendoza strain. Although copulatory success in T228 is reduced by the radiation treatment, the high numbers of sterilized males released would compensate this effect in the control programs. In a second experiment, under laboratory conditions, video recording techniques were applied. In this case two virgin males, one of the GSS and one emerged from wild collected fruits, competed during 30 min for a virgin wild female. The proportion of successful males did not differ between strains, but some differences were observed between strains in the time spent in different stages of the courtship. Males of the T228 were more aggressive, and they attempted to copulate with the other male more frequently than did wild males. These differences may be due to selection for more aggressive individuals under the overcrowded laboratory breeding conditions for this strain. (author)

  19. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  20. The MIVS [Modular Integrated Video System] Image Processing System (MIPS) for assisting in the optical surveillance data review process

    International Nuclear Information System (INIS)

    Horton, R.D.

    1990-01-01

    The MIVS (Modular Integrated Video System) Image Processing System (MIPS) is designed to review MIVS surveillance data automatically and identify IAEA defined objects of safeguards interest. To achieve this, MIPS uses both digital image processing and neural network techniques to detect objects of safeguards interest in an image and assist an inspector in the review of the MIVS video tapes. MIPS must be ''trained'' i.e., given example images showing the objects that it must recognize, for each different facility. Image processing techniques are used to first identify significantly changed areas of the image. A neural network is then used to determine if the image contains the important object(s). The MIPS algorithms have demonstrated the capability to detect when a spent fuel shipping cask is present in an image after MIPS is properly trained to detect the cask. The algorithms have also demonstrated the ability to reject uninteresting background activities such as people and crane movement. When MIPS detects an important object, the corresponding image is stored to another media and later replayed for the inspector to review. The MIPS algorithms are being implemented in commercially available hardware: an image processing subsystem and an 80386 Personal Computer. MIPS will have a high-level easy-to-use system interface to allow inspectors to train MIPS on MIVS data from different facilities and on various safeguards significant objects. This paper describes the MIPS algorithms, hardware implementation, and system configuration. 3 refs., 10 figs

  1. Video imaging measurement of interfacial wave velocity in air-water flow through a horizontal elbow

    Science.gov (United States)

    Al-Wazzan, Amir; Than, Cheok F.; Moghavvemi, Mahmoud; Yew, Chia W.

    2001-10-01

    Two-phase flow in pipelines containing elbows represents a common situation in the oil and gas industries. This study deals with the stratified flow regime between the gas and liquid phase through an elbow. It is of interest to study the change in wave characteristics by measuring the wave velocity and wavelength at the inlet and outlet of the elbow. The experiments were performed under concurrent air-water stratified flow in a horizontal transparent polycarbonate pipe of 0.05m diameter and superficial air and water velocities up to 8.97 and 0.0778 m/s respectively. A non-intrusive video imaging technique was applied to capture the waves. For image analysis, a frame by frame direct overlapping method was used to detect for pulsating flow and a pixel shifting method based on the detection of minimum values in the overlap function was used to determine wave velocity and wavelength. Under superficial gas velocity of less than 4.44 m/s, the results suggest a regular pulsating outflow produced by the elbow. At higher gas velocities, more random pulsation was found and the emergence of localized interfacial waves was detected. Wave velocities measured by this technique were found to produce satisfactory agreement with direct measurements.
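
    A hedged sketch of the pixel-shift idea: cross-correlating interface-height profiles from consecutive frames and converting the best-aligning lag to a wave velocity; the scale factor and frame interval are assumed inputs.

        import numpy as np

        def wave_velocity(profile_t0, profile_t1, mm_per_pixel, frame_interval_s):
            """Estimate interfacial wave velocity from the pixel shift that best aligns
            two interface-height profiles extracted from consecutive frames."""
            a = profile_t0 - profile_t0.mean()
            b = profile_t1 - profile_t1.mean()
            corr = np.correlate(b, a, mode="full")
            shift_px = np.argmax(corr) - (len(a) - 1)      # lag of maximum correlation
            return shift_px * mm_per_pixel / 1000.0 / frame_interval_s   # m/s

        # Example with a synthetic wave displaced by 12 pixels between frames:
        x = np.arange(400)
        p0 = np.sin(2 * np.pi * x / 80)
        p1 = np.sin(2 * np.pi * (x - 12) / 80)
        print(wave_velocity(p0, p1, mm_per_pixel=0.2, frame_interval_s=1 / 25))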

  2. Thermal image analysis of plastic deformation and fracture behavior by a thermo-video measurement system

    International Nuclear Information System (INIS)

    Ohbuchi, Yoshifumi; Sakamoto, Hidetoshi; Nagatomo, Nobuaki

    2016-01-01

    The visualization of the plastic region and the measurement of its size are indispensable for evaluating the deformation and fracture behavior of a material. In order to evaluate the plastic deformation and fracture behavior of a structural member containing flaws, the authors focused on the surface temperature generated by plastic strain energy. Visualization of the plastic deformation was achieved by analyzing the relationship between the extension of the plastic deformation range and the surface temperature distribution obtained with an infrared thermo-video system. Furthermore, FEM elasto-plastic analysis was carried out alongside the experiment, and the effectiveness of this non-contact, thermography-based measurement of the plastic deformation and fracture process was discussed. The evaluation method using an infrared imaging device proposed in this research has a feature that current evaluation methods lack: the heat distribution on the material surface is measured over a wide area, without contact, as a 2D image, and at high speed. The proposed technique can therefore measure the macroscopic plastic deformation distribution on the material surface widely and precisely as a 2D image, at high speed, by calculation from the heat generation and heat propagation distributions. (paper)

  3. Color, Scale, and Rotation Independent Multiple License Plates Detection in Videos and Still Images

    Directory of Open Access Journals (Sweden)

    Narasimha Reddy Soora

    2016-01-01

    Full Text Available Most of the existing license plate (LP detection systems have shown significant development in the processing of the images, with restrictions related to environmental conditions and plate variations. With increased mobility and internationalization, there is a need to develop a universal LP detection system, which can handle multiple LPs of many countries and any vehicle, in an open environment and all weather conditions, having different plate variations. This paper presents a novel LP detection method using different clustering techniques based on geometrical properties of the LP characters and proposed a new character extraction method, for noisy/missed character components of the LP due to the presence of noise between LP characters and LP border. The proposed method detects multiple LPs from an input image or video, having different plate variations, under different environmental and weather conditions because of the geometrical properties of the set of characters in the LP. The proposed method is tested using standard media-lab and Application Oriented License Plate (AOLP benchmark LP recognition databases and achieved the success rates of 97.3% and 93.7%, respectively. Results clearly indicate that the proposed approach is comparable to the previously published papers, which evaluated their performance on publicly available benchmark LP databases.

  4. Remote visual inspection of nuclear fuel pellets with fiber optics and video image processing

    International Nuclear Information System (INIS)

    Moore, F.W.

    1985-01-01

    Westinghouse Hanford Company has designed and is constructing a nuclear fuel fabrication process line for the Department of Energy. This process line includes a pellet surface inspection system that remotely inspects the cylindrical surface of nuclear fuel pellets for surface spots, flaws, or discoloration. The pellets are inspected on a 100% basis after pellet sintering. A feeder will deliver the pellets directly to fiber optic inspection head. The inspection head will view one pellet surface at a time. The surface image of the pellet will be imaged to a closed-circuit color television camera (CCTV). The output signal of the CCTV will be input to a digital imaging processor that stores approximately 25 pellet images at a time. A human operator will visually examine the images of the pellet surfaces on a high resolution monitor and accept or reject the pellets based on visual standards. The operator will use a digitizing tablet to record the location of rejected pellets, which will then be automatically removed from the product stream. The system is expandable to automated disposition of the pellet surface image

  5. Remote visual inspection of nuclear fuel pellets with fiber optics and video image processing

    International Nuclear Information System (INIS)

    Moore, F.W.

    1986-01-01

    Westinghouse Hanford Company has designed and is constructing a nuclear fuel fabrication process line for the Department of Energy. This process line includes a pellet surface inspection system that remotely inspects the cylindrical surface of nuclear fuel pellets for surface spots, flaws, or discoloration. The pellets are inspected on a 100 percent basis after pellet sintering. A feeder will deliver the pellets directly to a fiber optic inspection head. The inspection head will view one pellet surface at a time. The surface image of the pellet will be imaged to a closed-circuit color television camera (CCTV). The output signal of the CCTV will be input to a digital imaging processor that stores approximately 25 pellet images at a time. A human operator will visually examine the images of the pellet surfaces on a high resolution monitor and accept or reject the pellets based on visual standards. The operator will use a digitizing tablet to record the location of rejected pellets, which will then be automatically removed from the product stream. The system is expandable to automated disposition of the pellet surface image

  6. Nursing students' self-evaluation using a video recording of foley catheterization: effects on students' competence, communication skills, and learning motivation.

    Science.gov (United States)

    Yoo, Moon Sook; Yoo, Il Young; Lee, Hyejung

    2010-07-01

    An opportunity for a student to evaluate his or her own performance enhances self-awareness and promotes self-directed learning. Using three outcome measures of procedural competency, communication skills, and learning motivation, the effects of self-evaluation using a video recording of the student's Foley catheterization were investigated in this study. The students in the experimental group (n = 20) evaluated their Foley catheterization performance by reviewing the video recordings of their own performance, whereas students in the control group (n = 20) received written evaluation guidelines only. The results showed that the students in the experimental group had better scores on competency and communication skills. Self-evaluation of performance by reviewing a videotape of one's own performance appears to increase the competency of clinical skills in nursing students. Copyright 2010, SLACK Incorporated.

  7. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.
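
    In practice, the video analysis described here amounts to tracking an object's pixel position frame by frame and converting it to physical quantities. The sketch below shows that conversion step; the frame rate, metres-per-pixel scale and function name are illustrative assumptions.

    ```python
    # Minimal sketch of position/velocity extraction from per-frame pixel
    # coordinates, as done in video-analysis labs. Scale and frame rate are
    # illustrative assumptions.
    import numpy as np

    def kinematics_from_tracks(pixel_xy, fps=30.0, metres_per_pixel=0.002):
        """pixel_xy: (N, 2) array of tracked object positions, one row per frame."""
        t = np.arange(len(pixel_xy)) / fps            # time stamps in seconds
        xy = np.asarray(pixel_xy) * metres_per_pixel  # convert pixels to metres
        v = np.gradient(xy, t, axis=0)                # finite-difference velocity
        a = np.gradient(v, t, axis=0)                 # finite-difference acceleration
        return t, xy, v, a

    # Example check: a ball in free fall should show a roughly constant vertical
    # acceleration close to 9.8 m/s^2 (sign depends on the image y-axis).
    ```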

  8. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7+/-0.3) pixels and a mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.

  9. First high speed imaging of lightning from summer thunderstorms over India: Preliminary results based on amateur recording using a digital camera

    Science.gov (United States)

    Narayanan, V. L.

    2017-12-01

    For the first time, high-speed imaging of lightning from a few isolated tropical thunderstorms is reported from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during the summer months with a digital camera capable of recording high-speed videos at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14400 deinterlaced images per video file. An automatic processing algorithm is developed for quick identification and analysis of the lightning events, which will be discussed in detail. Preliminary results indicating different types of phenomena associated with lightning, such as stepped leaders, dart leaders, and luminous channels corresponding to continuing current and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud and cloud-air discharges will also be displayed. This indicates that although high-speed cameras operating at a few thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-sensor-based digital cameras can provide important information as well. The lightning imaging activity presented herein was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments to conduct coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings made with affordable digital cameras will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
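
    The automatic processing algorithm itself is not described in the abstract; one common first step for flagging candidate lightning frames in such footage is a simple brightness-jump test between consecutive frames, sketched below. The threshold value and function name are assumptions, not the author's method.

    ```python
    # Minimal sketch of flagging candidate lightning frames in a high-speed
    # video by sudden jumps in mean frame brightness. The threshold is an
    # illustrative assumption, not the value used by the author.
    import cv2

    def flag_lightning_frames(video_path, jump_threshold=8.0):
        cap = cv2.VideoCapture(video_path)
        flagged, prev_mean, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mean_brightness = float(gray.mean())
            # A lightning stroke shows up as an abrupt brightness increase
            # relative to the previous frame.
            if prev_mean is not None and mean_brightness - prev_mean > jump_threshold:
                flagged.append(idx)
            prev_mean = mean_brightness
            idx += 1
        cap.release()
        return flagged
    ```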

  10. Remote visual inspection of nuclear fuel pellets with fiber optics and video image processing

    International Nuclear Information System (INIS)

    Moore, F.W.

    1987-01-01

    Westinghouse Hanford Company has designed and constructed a nuclear fuel fabrication process line for the U.S. Department of Energy. This process line includes a system that remotely inspects the cylindrical surface of nuclear fuel pellets for surface spots, flaws, or discoloration. The pellets are inspected on a 100% basis after pellet sintering. A feeder delivers the pellets directly to a fiber optic inspection head, which views one pellet surface at a time and images it to a closed-circuit color television camera (CCTV). The output signal of the CCTV is input to a digital imaging processor that stores approximately 25 pellet images at a time. A human operator visually examines the images of the pellet surfaces on a high resolution monitor and accepts or rejects the pellets based on visual standards. The operator uses a digitizing tablet to record the location of rejected pellets, which are then automatically removed from the product stream. The system is expandable to automated disposition of the pellet surface image

  11. NEI YouTube Videos: Amblyopia

    Medline Plus


  12. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  13. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Science.gov (United States)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  14. Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.

    Science.gov (United States)

    Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N

    2017-05-01

    This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
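
    As a simplified illustration of the nuclear-norm idea behind SiLRTC-TT, the sketch below completes missing entries by iterative singular-value thresholding on a single matricization of the tensor. The actual algorithms balance several TT matricizations; the threshold and iteration count here are assumptions.

    ```python
    # Minimal sketch of completing missing entries by singular-value thresholding
    # on one matricization of the data tensor. This is a simplified stand-in for
    # the SiLRTC-TT idea; the threshold and iteration count are illustrative.
    import numpy as np

    def complete_by_svt(tensor, observed_mask, unfold_shape, tau=5.0, n_iter=100):
        """tensor: array with arbitrary values at unobserved entries.
        observed_mask: boolean array, True where entries are known.
        unfold_shape: (rows, cols) used to matricize; rows*cols == tensor.size.
        """
        x = np.where(observed_mask, tensor, 0.0)
        for _ in range(n_iter):
            # Soft-threshold the singular values of the matricized estimate.
            u, s, vt = np.linalg.svd(x.reshape(unfold_shape), full_matrices=False)
            s = np.maximum(s - tau, 0.0)
            x = ((u * s) @ vt).reshape(tensor.shape)
            # Re-impose the observed entries.
            x[observed_mask] = tensor[observed_mask]
        return x
    ```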

  15. Comparative study of digital laser film and analog paper image recordings

    International Nuclear Information System (INIS)

    Lee, K.R.; Cox, G.G.; Templeton, A.W.; Preston, D.F.; Anderson, W.H.; Hensley, K.S.; Dwyer, S.J.

    1987-01-01

    The increase in the use of various imaging modalities demands higher quality and more efficacious analog image recordings. Laser electronic recordings with digital array prints of 4,000 x 5,000 x 12 bits obtained using laser-sensitive film or paper are being evaluated. Dry silver paper recordings are being improved and evaluated. High-resolution paper dot printers are being studied to determine their gray-scale capabilities. The authors evaluated the image quality, costs, clinical utilization, and acceptability of CT scans, MR images, digital subtraction angiograms, digital radiographs, and radionuclide scans recorded by seven different printers (three laser, three silver paper, and one dot) and compared the same features in conventional film recording. This exhibit outlines the technical developments and instrumentation of digital laser film and analog paper recorders and presents the results of the study

  16. EXTRACTION OF BENTHIC COVER INFORMATION FROM VIDEO TOWS AND PHOTOGRAPHS USING OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. T. L. Estomata

    2012-07-01

    Full Text Available Mapping benthic cover in deep waters comprises a very small proportion of studies in this field of research. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods, such as neural networks and rapid classification via down-sampling. In this study, an attempt was made to use accurate bathymetric data obtained with a multi-beam echo sounder (MBES) as complementary data to the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rule sets on area (less than or equal to 700 pixels for fish, and between 700 and 10,000 pixels for rubble), as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps with a higher overall accuracy, 93.78±0.85%, compared to pixel-based methods, which had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
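
    The area rules quoted above translate directly into a per-object classifier. A minimal sketch follows, in which only the area limits come from the text; the texture (standard deviation) threshold is an illustrative assumption.

    ```python
    # Minimal sketch of the area/texture rule set described in the abstract.
    # The standard-deviation (texture) threshold is an illustrative assumption;
    # only the area limits come from the text.
    def classify_object(area_px, std_dev, texture_threshold=20.0):
        if area_px <= 700 and std_dev > texture_threshold:
            return "fish"
        if 700 < area_px <= 10_000 and std_dev > texture_threshold:
            return "rubble"
        # Remaining objects fall back to spectrally separable classes.
        return "coral_or_sand"

    # (area in pixels, per-object standard deviation) example inputs
    objects = [(450, 35.1), (5200, 27.4), (15000, 6.3)]
    labels = [classify_object(a, s) for a, s in objects]
    ```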

  17. Prediction of foal carcass composition and wholesale cut yields by using video image analysis.

    Science.gov (United States)

    Lorenzo, J M; Guedes, C M; Agregán, R; Sarriés, M V; Franco, D; Silva, S R

    2018-01-01

    This work represents the first application of video image analysis (VIA) technology to predicting lean meat and fat composition in the equine species. Images of the left sides of the carcasses (n=42) were captured from the dorsal, lateral and medial views using a high-resolution digital camera. A total of 41 measurements (angles, lengths, widths and areas) were obtained by VIA. The variation in the percentage of lean meat obtained from the forequarter (FQ) and hindquarter (HQ) carcass ranged between 5.86% and 7.83%. However, the percentage of fat (FAT) obtained from the FQ and HQ carcass presented a higher variation (CV between 41.34% and 44.58%). By combining different measurements and using prediction models with cold carcass weight (CCW) and VIA measurements, the coefficients of determination (k-fold-R²) were 0.458 and 0.532 for FQ and HQ, respectively. On the other hand, employing the most comprehensive model (CCW plus all VIA measurements), the k-fold-R² increased from 0.494 to 0.887 and from 0.513 to 0.878 with respect to the simplest model (CCW only), while precision increased with the reduction in the root mean square error (from 2.958 to 0.947 and from 1.841 to 0.787) for the hindquarter fat and lean percentage, respectively. With CCW plus VIA measurements it is possible to explain the variation in wholesale cut yields (k-fold-R² between 0.533 and 0.889). Overall, the VIA technology applied in the present study could be considered an accurate method to assess horse carcass composition, and could have a role in breeding programmes and research studies to assist in the development of a value-based marketing system for horse carcasses.
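
    The evaluation described, k-fold cross-validated R² and RMSE for models built from cold carcass weight plus VIA measurements, can be sketched as below. The use of scikit-learn, the fold count and the data layout are assumptions rather than the authors' exact procedure.

    ```python
    # Minimal sketch of k-fold evaluation of a linear model predicting a carcass
    # trait from cold carcass weight (CCW) plus VIA measurements. Data shapes
    # and fold count are assumptions; the paper's exact model is not given here.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    def kfold_r2_rmse(X, y, n_splits=5, seed=0):
        """X: (n_samples, n_features) numpy array; y: (n_samples,) numpy array."""
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        preds = np.empty_like(y, dtype=float)
        for train_idx, test_idx in kf.split(X):
            model = LinearRegression().fit(X[train_idx], y[train_idx])
            preds[test_idx] = model.predict(X[test_idx])
        ss_res = np.sum((y - preds) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        rmse = float(np.sqrt(ss_res / len(y)))
        return r2, rmse

    # X could hold CCW alone (simplest model) or CCW plus all VIA measurements
    # (most comprehensive model); comparing the two R2/RMSE pairs mirrors the
    # comparison reported in the abstract.
    ```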

  18. Integrated homeland security system with passive thermal imaging and advanced video analytics

    Science.gov (United States)

    Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert

    2007-04-01

    A complete detection, management, and control security system is absolutely essential to preempting criminal and terrorist assaults on key assets and critical infrastructure. According to Tom Ridge, former Secretary of the US Department of Homeland Security, "Voluntary efforts alone are not sufficient to provide the level of assurance Americans deserve and they must take steps to improve security." Further, it is expected that Congress will mandate private sector investment of over $20 billion in infrastructure protection between 2007 and 2015, which is incremental to funds currently being allocated to key sites by the department of Homeland Security. Nearly 500,000 individual sites have been identified by the US Department of Homeland Security as critical infrastructure sites that would suffer severe and extensive damage if a security breach should occur. In fact, one major breach in any of 7,000 critical infrastructure facilities threatens more than 10,000 people. And one major breach in any of 123 facilities-identified as "most critical" among the 500,000-threatens more than 1,000,000 people. Current visible, nightvision or near infrared imaging technology alone has limited foul-weather viewing capability, poor nighttime performance, and limited nighttime range. And many systems today yield excessive false alarms, are managed by fatigued operators, are unable to manage the voluminous data captured, or lack the ability to pinpoint where an intrusion occurred. In our 2006 paper, "Critical Infrastructure Security Confidence Through Automated Thermal Imaging", we showed how a highly effective security solution can be developed by integrating what are now available "next-generation technologies" which include: Thermal imaging for the highly effective detection of intruders in the dark of night and in challenging weather conditions at the sensor imaging level - we refer to this as the passive thermal sensor level detection building block Automated software detection

  19. Video encoder/decoder for encoding/decoding motion compensated images

    NARCIS (Netherlands)

    1996-01-01

    Video encoder and decoder, provided with a motion compensator for motion-compensated video coding or decoding in which a picture is coded or decoded in blocks in alternately horizontal and vertical steps. The motion compensator is provided with addressing means (160) and controlled multiplexers

  20. A review of techniques for the identification and measurement of fish in underwater stereo-video image sequences

    Science.gov (United States)

    Shortis, Mark R.; Ravanbakskh, Mehdi; Shaifat, Faisal; Harvey, Euan S.; Mian, Ajmal; Seager, James W.; Culverhouse, Philip F.; Cline, Danelle E.; Edgington, Duane R.

    2013-04-01

    Underwater stereo-video measurement systems are used widely for counting and measuring fish in aquaculture, fisheries and conservation management. To determine population counts, spatial or temporal frequencies, and age or weight distributions, snout to fork length measurements are captured from the video sequences, most commonly using a point and click process by a human operator. Current research aims to automate the measurement and counting task in order to improve the efficiency of the process and expand the use of stereo-video systems within marine science. A fully automated process will require the detection and identification of candidates for measurement, followed by the snout to fork length measurement, as well as the counting and tracking of fish. This paper presents a review of the techniques used for the detection, identification, measurement, counting and tracking of fish in underwater stereo-video image sequences, including consideration of the changing body shape. The review will analyse the most commonly used approaches, leading to an evaluation of the techniques most likely to be a general solution to the complete process of detection, identification, measurement, counting and tracking.
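
    Whatever the detection and identification approach, the snout-to-fork length ultimately comes from triangulating the two annotated points in the calibrated stereo pair and measuring their 3D separation. The sketch below assumes an idealized rectified rig; the focal length, baseline and principal point values are placeholders.

    ```python
    # Minimal sketch of snout-to-fork length from a rectified stereo pair.
    # Assumes an idealised calibrated rig; focal length, baseline and principal
    # point are illustrative values, not those of any real system.
    import numpy as np

    def triangulate(u_left, v_left, u_right, f=1400.0, baseline=0.6, cx=960.0, cy=540.0):
        """Return a 3D point (metres) from matched pixel coordinates."""
        disparity = u_left - u_right
        z = f * baseline / disparity          # depth from disparity
        x = (u_left - cx) * z / f
        y = (v_left - cy) * z / f
        return np.array([x, y, z])

    def fork_length(snout_lr, fork_lr):
        """snout_lr / fork_lr: ((uL, vL), uR) pixel measurements for each point."""
        p_snout = triangulate(snout_lr[0][0], snout_lr[0][1], snout_lr[1])
        p_fork = triangulate(fork_lr[0][0], fork_lr[0][1], fork_lr[1])
        return float(np.linalg.norm(p_snout - p_fork))
    ```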

  1. Observing Observers: Using Video to Prompt and Record Reflections on Teachers' Pedagogies in Four Regions of Canada

    Science.gov (United States)

    Reid, David A; Simmt, Elaine; Savard, Annie; Suurtamm, Christine; Manuel, Dominic; Lin, Terry Wan Jung; Quigley, Brenna; Knipping, Christine

    2015-01-01

    Regional differences in performance in mathematics across Canada prompted us to conduct a comparative study of middle-school mathematics pedagogy in four regions. We built on the work of Tobin, using a theoretical framework derived from the work of Maturana. In this paper, we describe the use of video as part of the methodology used. We used…

  2. The Effect of Theme Preference on Academic Word List Use: A Case for Smartphone Video Recording Feature

    Science.gov (United States)

    Gromik, Nicolas A.

    2017-01-01

    Sixty-seven Japanese English as a Second Language undergraduate learners completed one smartphone video production per week for 12 weeks, based on a teacher-selected theme. Designed as a case study for this specific context, data from students' oral performances was analyzed on a weekly basis for their use of the Academic Word List (AWL). A…

  3. Algorithm for Video Summarization of Bronchoscopy Procedures

    Directory of Open Access Journals (Sweden)

    Leszczuk Mikołaj I

    2011-12-01

    Full Text Available Background: The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods: The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results: The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions
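
    A common proxy for the blurry or unfocused frames that such a summarization step discards is the variance of the Laplacian of each frame. The sketch below illustrates that idea only, with an assumed threshold; it is not the authors' published criterion.

    ```python
    # Minimal sketch of flagging "non-informative" (blurred/unfocused) frames by
    # the variance of the Laplacian. The threshold is an illustrative assumption
    # and stands in for the more elaborate criteria described in the paper.
    import cv2

    def is_informative(frame_bgr, sharpness_threshold=60.0):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        return sharpness >= sharpness_threshold

    def summarize(frames):
        # Keep only frames judged sharp enough to carry diagnostic content.
        return [f for f in frames if is_informative(f)]
    ```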

  4. Effects of hyperthermia on intracellular Ca²⁺ monitored by digitized video image fluorescence microscopy

    International Nuclear Information System (INIS)

    Asher, C.R.; Mikkelsen, R.B.

    1987-01-01

    With digitized video image fluorescence microscopy and the fluorescent Ca²⁺ dye fura-2, the authors examined heat effects on intracellular free Ca²⁺, [Ca²⁺]f. HT-29 human colon cancer cells grown on coverslips were equilibrated with 2.0 μM fura-2 in RPMI 1540 (20°, 15 min), washed three times and incubated at 20° for 1 h. Coverslips were mounted in a Dvorok perfusion chamber sitting within a temperature-controlled microscope stage. Fluorescence was monitored at 500 nm by epi-illumination at 385 nm, the excitation maximum for the free dye, and at 340 nm, the maximum for the Ca²⁺-complexed dye, with a computer-controlled filter wheel. The emission intensity ratio, I340/I385, which corrects for dye leakage, photo-bleaching and cell thickness, was used to calculate [Ca²⁺]f. Measurements of 200 cells at 37°, using a bit pad and mouse to select 0.6 x 0.6 μm cytoplasmic areas, indicated 3 populations of cells in terms of [Ca²⁺]f (70%, 40-60 nM; 15%, 70-110 nM; 15%, 120-200 nM). Heating to 43° for 1 h resulted in an overall decrease in [Ca²⁺]f, with greater than 90% of cells within 30-50 nM. Not all cells responded to heat. Post-incubation for 3 h at 37° showed the identical cell distribution; at 24 h, the cell distribution was that of non-heated cells. The relationship of these results to cell killing and thermotolerance is not understood, but these results indicate the importance of cell heterogeneity in the response to heat
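
    Ratiometric fura-2 measurements of this kind are conventionally converted to free calcium with the Grynkiewicz equation. A minimal sketch follows, in which the calibration constants (Kd, Rmin, Rmax and the Sf/Sb scaling) are placeholders because the abstract does not report them.

    ```python
    # Minimal sketch of converting a fura-2 ratio R = I340/I385 to free Ca2+
    # using the standard Grynkiewicz ratiometric equation. Kd, Rmin, Rmax and
    # the Sf/Sb scaling are calibration constants not given in the abstract;
    # the values below are placeholders.
    def free_calcium_nM(R, Kd=224.0, Rmin=0.2, Rmax=8.0, Sf_over_Sb=6.0):
        return Kd * Sf_over_Sb * (R - Rmin) / (Rmax - R)

    # Example: a measured ratio computed from background-corrected 340 nm and
    # 385 nm intensities for one pixel or cytoplasmic region.
    ratio = 0.45
    print(free_calcium_nM(ratio))  # nanomolar estimate for this region
    ```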

  5. [Microcytomorphometric video-image detection of nuclear chromatin in ovarian cancer].

    Science.gov (United States)

    Grzonka, Dariusz; Kamiński, Kazimierz; Kaźmierczak, Wojciech

    2003-09-01

    Image-detection technology applied to tissue preparations precisely evaluates nuclear chromatin content, the size and shape of the cell nucleus, mitotic indicators, DNA index, ploidy, S-phase fraction and other parameters. Image-detection methods include microcytomorphometric video imaging (MCMM-VI), flow, double-flow and fluorescence-activated techniques. Diagnostic methods for malignant neoplasms of the ovary are still nonspecific and imprecise, which is one reason for unsatisfactory treatment results. The aim was to evaluate microcytomorphometric measurements of nuclear chromatin in histopathologic tissue preparations (HP) of ovarian cancer and to compare them with normal ovarian tissue. We assessed 10 paraffin-embedded tissue preparations of serous ovarian cancer, 4 preparations of mucinous cancer and 2 cases of Krukenberg tumor from patients operated on in the Clinic of Perinatology and Gynaecology, Silesian Medical Academy in Zabrze, in the period 2001-2002. MCMM-VI estimation was based on a computer-aided analysis system: Axioscop 20 microscope, JVC TK-C 1380 TV camera, and Carl Zeiss KS Vision 400 rel. 3.0 software. The following MCMM-VI parameters were assessed: count of pathologic nuclei, nuclear diameter, area, min/max diameter ratio, equivalent circle diameter (Dcircle), mean brightness (mean D), integrated optical density (IOD = area x mean D), DNA index and the 2.5 c exceeding rate percentage (2.5 c ER%). MCMM-VI was performed on 160 areas from 16 cancer preparations and 100 areas of normal ovarian tissue. Statistical analysis was performed using Student's t-test. We obtained statistically significantly higher values of the nuclear chromatin parameters, DI and 2.5 c ER for mucinous cancer and Krukenberg tumor in comparison to serous cancer. MCMM-VI chromatin parameters of malignant ovarian neoplasms were statistically significantly higher than those of normal ovarian tissue. Cytometric and karyometric parameters of nuclear chromatin estimated by MCMM-VI are useful in the diagnosis and prognosis of ovarian cancer.

  6. A no-reference image and video visual quality metric based on machine learning

    Science.gov (United States)

    Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy

    2018-04-01

    The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of pairs of video sequences and subjective quality scores. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset in comparison with existing approaches.

  7. Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.

    Directory of Open Access Journals (Sweden)

    Daniel H Monson

    Full Text Available During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m² (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06), and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.

  8. New operator's console recorder

    International Nuclear Information System (INIS)

    Anon.

    2009-01-01

    This article described a software module that automatically records images being shown on multiple HMI or SCADA operator's displays. Videos used for monitoring activities at industrial plants can be combined with the operator console videos and data from a process historian. This enables engineers, analysts or investigators to see what is occurring in the plant, what the operator is seeing on the HMI screen, and all relevant real-time data from an event. In the case of a leak at a pumping station, investigators could watch plant video taken at a remote site showing fuel oil creeping across the floor, with real-time data being acquired from pumps, valves and the receiving tank while the leak is occurring. The video shows the operator's HMI screen as well as the alarm screen that signifies the leak detection. The Longwatch Operator's Console Recorder and Video Historian are used together to acquire data about actual plant management because they show everything that happens during an event. The Console Recorder automatically retrieves and replays operator displays by clicking on a time-based alarm or system message. Playback of the video feed is a valuable tool for training and analysis purposes, and can help mitigate insurance and regulatory issues by eliminating uncertainty and conjecture. 1 fig.

  9. First year midwifery students' experience with self-recorded and assessed video of selected midwifery practice skills at Otago Polytechnic in New Zealand.

    Science.gov (United States)

    McIntosh, Carolyn; Patterson, Jean; Miller, Suzanne

    2018-01-01

    Studying undergraduate midwifery at a distance has advantages in terms of accessibility and community support but presents challenges for practice-based competence assessment. Student-recorded videos provide opportunities for completing the assigned skills, self-reflection, and assessment by a lecturer. This research asked how midwifery students experienced the process of completing the Video Assessment of Midwifery Practice Skills (VAMPS) in 2014 and 2015. The aim of the survey was to identify the benefits and challenges of the VAMPS assessment and to identify opportunities for improvement from the students' perspective. All students who had participated in the VAMPS assessment during 2014 and 2015 were invited to complete an online survey. To maintain confidentiality for the students, the Qualtrics survey was administered and the data downloaded by the Organisational Research Officer. Ethical approval was granted by the organisational ethics committee. Descriptive statistics were generated and students' comments were collated. The VAMPS provided an accessible option for the competence assessment and the opportunity for self-reflection and re-recording to perfect their skills, which the students appreciated. The main challenges related to the technical aspects of recording and uploading the assessment. This study highlighted some of the benefits and challenges experienced by the midwifery students and showed that practice skills can be successfully assessed at a distance. The additional benefit of accessibility afforded by video assessment is a new and unique finding for undergraduate midwifery education and may resonate with other educators seeking ways to assess similar skill sets with cohorts of students studying at a distance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Snow Cover Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of snow cover from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument...

  11. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Aerosol Detection Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of suspended matter from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  12. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Sensor Data Record (SDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sensor Data Records (SDRs), or Level 1b data, from the Visible Infrared Imaging Radiometer Suite (VIIRS) are the calibrated and geolocated radiance and reflectance...

  13. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Mask Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains a high quality Environmental Data Record (EDR) of cloud masks from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard...

  14. Identifying FRBR Work-Level Data in MARC Bibliographic Records for Manifestations of Moving Images

    Directory of Open Access Journals (Sweden)

    Lynne Bisko

    2008-12-01

    Full Text Available The library metadata community is dealing with the challenge of implementing the conceptual model, Functional Requirements for Bibliographic Records (FRBR). In response, the Online Audiovisual Catalogers (OLAC) created a task force to study the issues related to creating and using FRBR-based work-level records for moving images. This article presents one part of the task force's work: it looks at the feasibility of creating provisional FRBR work-level records for moving images by extracting data from existing manifestation-level bibliographic records. Using a sample of 941 MARC records, a subgroup of the task force conducted a pilot project to look at five characteristics of moving image works. Here they discuss their methodology; analysis; selected results for two elements, original date (year) and director name; and conclude with some suggested changes to MARC coding and current cataloging policy.

  15. Analysis of distribution of PSL intensity recorded in imaging plate

    International Nuclear Information System (INIS)

    Oda, Keiji; Tsukahara, Kazutaka; Tada, Hidenori; Yamauchi, Tomoya

    2006-01-01

    Supplementary experiments and theoretical considerations have been carried out concerning a new method for particle identification with an imaging plate, which was proposed in the previous paper. The imaging plate was exposed to ¹³⁷Cs γ-rays, 2 MeV protons accelerated by a tandem Van de Graaff, X-rays emitted from a tube operated at 20-70 kV, as well as α- and β-rays. The frequency distribution in PSL intensity in a pixel of 100 μm x 100 μm was measured and the standard deviation was obtained by fitting to a Gaussian. It was confirmed that the relative standard deviation decreased with the average PSL intensity for every radiation species and that the curves were roughly divided into four groups of α-rays, protons, β-rays and photons. In the second step, these data were analyzed by plotting the square of the relative standard deviation against the average PSL intensity in full-log scale, where the relation should be expressed by a straight line with a slope of -1 provided that the deviation is dominated only by statistical fluctuation. The data for α- and β-rays deviated from a straight line and approached saturated values as the average PSL intensity increased. This saturation was considered to be caused by inhomogeneity in the source intensity. It was also pointed out that the value of the intercept on the full-log plot would carry important information about the PSL reading efficiency, one of the characteristic parameters of the imaging plate. (author)
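
    The log-log analysis described above can be reproduced with an ordinary least-squares fit in log space. The sketch below uses made-up arrays in place of the measured PSL data and simply checks how close the fitted slope is to -1.

    ```python
    # Minimal sketch of the log-log analysis described above: fit
    # log10(sigma_rel^2) versus log10(mean PSL) and compare the slope with -1,
    # the value expected if only counting statistics contribute. Input arrays
    # are placeholders for measured data.
    import numpy as np

    mean_psl = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # average PSL per pixel
    rel_sd = np.array([0.42, 0.30, 0.21, 0.14, 0.10, 0.075])   # relative std deviation

    slope, intercept = np.polyfit(np.log10(mean_psl), np.log10(rel_sd ** 2), 1)

    # A slope close to -1 indicates purely statistical fluctuation; saturation of
    # rel_sd at high intensity (slope shallower than -1) points to source
    # inhomogeneity, as reported for the alpha- and beta-ray exposures.
    print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
    ```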

  16. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology—The ADAPT Study Data-Set

    Directory of Open Access Journals (Sweden)

    Alan Kevin Bourke

    2017-03-01

    Full Text Available Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, not developed using the target population, or not labelled to a high enough resolution to capture the true detail of human movement. We have designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects’ movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol, and a body-worn camera for out-of-lab activities. Video labelling of the subjects’ movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%, and Cohen’s Kappa, corrected kappa, Krippendorff’s alpha and Fleiss’ kappa >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate (≥25 fps) video-labelled data, recorded in a free-living environment from older adults living independently. This dataset is suitable for validation of existing activity classification systems and development of new activity classification algorithms.
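
    Agreement figures such as the Cohen's kappa quoted above compare observed rater agreement with the agreement expected by chance. A minimal sketch of that computation on hypothetical label sequences (not ADAPT data) is given below.

    ```python
    # Minimal sketch of Cohen's kappa for two raters assigning activity labels
    # to the same video segments. The label sequences are hypothetical examples,
    # not data from the ADAPT study.
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a = Counter(labels_a)
        freq_b = Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1.0 - expected)

    rater1 = ["walk", "sit", "sit", "stand", "walk", "lie"]
    rater2 = ["walk", "sit", "stand", "stand", "walk", "lie"]
    print(cohens_kappa(rater1, rater2))
    ```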

  17. Using Vertical Panoramic Images to Record a Historic Cemetery

    Science.gov (United States)

    Tommaselli, A. M. G.; Polidori, L.; Hasegawa, J. K.; Camargo, P. O.; Hirao, H.; Moraes, M. V. A.; Rissate, E. A., Jr.; Henrique, G. R.; Abreu, P. A. G.; Berveglieri, A.; Marcato, J., Jr.

    2013-07-01

    In 1919, during colonization of the West Region of São Paulo State, Brazil, the Ogassawara family built a cemetery and a school with donations received from the newspaper Osaka Mainichi Shimbum, in Osaka, Japan. The cemetery was closed by President Getúlio Vargas in 1942, during the Second World War. The architecture of the Japanese cemetery is a unique feature in Latin America. Even considering its historical and cultural relevance, there is a lack of geometric documentation about the location and features of the tombs and other buildings within the cemetery. As an alternative to provide detailed and fast georeferenced information about the area, it is proposed to use near vertical panoramic images taken with a digital camera with a fisheye lens as the primary data, followed by bundle adjustment and photogrammetric restitution. The aim of this paper is to present a feasibility study on the proposed technique with the assessment of the results with a strip of five panoramic images, taken over some graves in the Japanese cemetery. The results showed that a plan at a scale of 1:200 can be produced with photogrammetric restitution at a very low cost, when compared to topographic surveying or laser scanning. The paper will address the main advantages of this technique as well as its drawbacks, with quantitative analysis of the results achieved in this experiment.

  18. USING VERTICAL PANORAMIC IMAGES TO RECORD A HISTORIC CEMETERY

    Directory of Open Access Journals (Sweden)

    A. M. G. Tommaselli

    2013-07-01

    Full Text Available In 1919, during colonization of the West Region of São Paulo State, Brazil, the Ogassawara family built a cemetery and a school with donations received from the newspaper Osaka Mainichi Shimbum, in Osaka, Japan. The cemetery was closed by President Getúlio Vargas in 1942, during the Second World War. The architecture of the Japanese cemetery is a unique feature in Latin America. Even considering its historical and cultural relevance, there is a lack of geometric documentation about the location and features of the tombs and other buildings within the cemetery. As an alternative to provide detailed and fast georeferenced information about the area, it is proposed to use near vertical panoramic images taken with a digital camera with a fisheye lens as the primary data, followed by bundle adjustment and photogrammetric restitution. The aim of this paper is to present a feasibility study on the proposed technique with the assessment of the results with a strip of five panoramic images, taken over some graves in the Japanese cemetery. The results showed that a plan at a scale of 1:200 can be produced with photogrammetric restitution at a very low cost, when compared to topographic surveying or laser scanning. The paper will address the main advantages of this technique as well as its drawbacks, with quantitative analysis of the results achieved in this experiment.

  19. Imaging, Health Record, and Artificial Intelligence: Hype or Hope?

    Science.gov (United States)

    Mazzanti, Marco; Shirka, Ervina; Gjergo, Hortensia; Hasimi, Endri

    2018-05-10

    The review is focused on "digital health", which means advanced analytics based on multi-modal data. The "Health Care Internet of Things", which uses sensors, apps, and remote monitoring could provide continuous clinical information in the cloud that enables clinicians to access the information they need to care for patients everywhere. Greater standardization of acquisition protocols will be needed to maximize the potential gains from automation and machine learning. Recent artificial intelligence applications on cardiac imaging will not be diagnosing patients and replacing doctors but will be augmenting their ability to find key relevant data they need to care for a patient and present it in a concise, easily digestible format. Risk stratification will transition from oversimplified population-based risk scores to machine learning-based metrics incorporating a large number of patient-specific clinical and imaging variables in real-time beyond the limits of human cognition. This will deliver highly accurate and individual personalized risk assessments and facilitate tailored management plans.

  20. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a crucial component in safeguard and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities such as better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguard requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguard surveillance systems. Today's safeguard systems can incorporate intelligent motion detection with a very low false-alarm rate and reduced archiving volume, embedded image processing capability for object behavior and event-based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure and tamper-indicating transmission. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of front-end video imagery exploitation tools such as automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, and gesture recognition makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional-vision-capable, all-weather, day-night surveillance a reality.

  1. Ultrafast Holographic Image Recording by Single Shot Femtosecond Spectral Hole Burning

    National Research Council Canada - National Science Library

    Rebane, Aleksander

    2001-01-01

    .... This allowed us to record image holograms with 150-fs duration pulses without need to accumulate the SHB effect from many exposures. Results of this research show that it is possible to perform optical recording of data in frequency-domain on ultrafast time scale. These results can be used also as a new diagnostic tool for femtosecond dynamics in various ultrafast optical interactions.

  2. Video image analysis as a potential grading system for Uruguayan beef carcasses.

    Science.gov (United States)

    Vote, D J; Bowling, M B; Cunha, B C N; Belk, K E; Tatum, J D; Montossi, F; Smith, G C

    2009-07-01

    A study was conducted in 2 phases to evaluate the effectiveness of 1) the VIAscan Beef Carcass System (BCSys; hot carcass system) and the CVS BeefCam (chilled carcass system), used independently or in combination, to predict Uruguayan beef carcass fabrication yields; and 2) the CVS BeefCam to segregate Uruguayan beef carcasses into groups that differ in the Warner-Bratzler shear force (WBSF) values of their LM steaks. The results from the meat yield phase of the present study indicated that the prediction of saleable meat yield percentages from Uruguayan beef carcasses by use of the BCSys or CVS BeefCam is similar to, or slightly better than, the use of USDA yield grade calculated to the nearest 0.1 and was much more effective than prediction based on Uruguay National Institute of Meat (INAC) grades. A further improvement in fabrication yield prediction could be obtained by use of a dual-component video image analysis (VIA) system. Whichever method of VIA prediction of fabrication yield is used, a single predicted value of fabrication yield for every carcass removes an impediment to the implementation of a value-based pricing system. Additionally, a VIA method of predicting carcass yield has the advantage over the current INAC classification system in that estimates would be produced by an instrument rather than by packing plant personnel, which would appeal to cattle producers. Results from the tenderness phase of the study indicated that the CVS BeefCam output variable for marbling was not (P > 0.05) able to segregate steer and heifer carcasses into groups that differed in WBSF values. In addition, the results of segregating steer and heifer carcasses according to muscle color output variables indicate that muscle maturity and skeletal maturity were useful for segregating carcasses according to differences in WBSF values of their steaks (P > 0.05). Use of VIA to predict beef carcass fabrication yields could improve accuracy and reduce subjectivity in comparison

  3. Does sharing the electronic health record in the consultation enhance patient involvement? A mixed-methods study using multichannel video recording and in-depth interviews in primary care.

    Science.gov (United States)

    Milne, Heather; Huby, Guro; Buckingham, Susan; Hayward, James; Sheikh, Aziz; Cresswell, Kathrin; Pinnock, Hilary

    2016-06-01

    Sharing the electronic health-care record (EHR) during consultations has the potential to facilitate patient involvement in their health care, but research about this practice is limited. We used multichannel video recordings to identify examples and examine the practice of screen-sharing within 114 primary care consultations. A subset of 16 consultations was viewed by the general practitioner and/or patient in 26 reflexive interviews. Screen-sharing emerged as a significant theme and was explored further in seven additional patient interviews. Final analysis involved refining themes from interviews and observation of videos to understand how screen-sharing occurred, and its significance to patients and professionals. Eighteen (16%) of 114 videoed consultations involved instances of screen-sharing. Screen-sharing occurred in six of the subset of 16 consultations with interviews and was a significant theme in 19 of 26 interviews. The screen was shared in three ways: 'convincing' the patient of a diagnosis or treatment; 'translating' between medical and lay understandings of disease/medication; and by patients 'verifying' the accuracy of the EHR. However, patients and most GPs perceived the screen as the doctor's domain, not to be routinely viewed by the patient. Screen-sharing can facilitate patient involvement in the consultation, depending on the way in which sharing comes about, but the perception that the record belongs to the doctor is a barrier. To exploit the potential of sharing the screen to promote patient involvement, there is a need to reconceptualise and redesign the EHR. © 2014 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  4. Is it acceptable to video-record palliative care consultations for research and training purposes? A qualitative interview study exploring the views of hospice patients, carers and clinical staff.

    Science.gov (United States)

    Pino, Marco; Parry, Ruth; Feathers, Luke; Faull, Christina

    2017-09-01

    Research using video recordings can advance understanding of healthcare communication and improve care, but making and using video recordings carries risks. To explore views of hospice patients, carers and clinical staff about whether videoing patient-doctor consultations is acceptable for research and training purposes. We used semi-structured group and individual interviews to gather hospice patients, carers and clinical staff views. We used Braun and Clark's thematic analysis. Interviews were conducted at one English hospice to inform the development of a larger video-based study. We invited patients with capacity to consent and whom the care team judged were neither acutely unwell nor severely distressed (11), carers of current or past patients (5), palliative medicine doctors (7), senior nurses (4) and communication skills educators (5). Participants viewed video-based research on communication as valuable because of its potential to improve communication, care and staff training. Video-based research raised concerns including its potential to affect the nature and content of the consultation and threats to confidentiality; however, these were not seen as sufficient grounds for rejecting video-based research. Video-based research was seen as acceptable and useful providing that measures are taken to reduce possible risks across the recruitment, recording and dissemination phases of the research process. Video-based research is an acceptable and worthwhile way of investigating communication in palliative medicine. Situated judgements should be made about when it is appropriate to involve individual patients and carers in video-based research on the basis of their level of vulnerability and ability to freely consent.

  5. A Peer-Reviewed Instructional Video is as Effective as a Standard Recorded Didactic Lecture in Medical Trainees Performing Chest Tube Insertion: A Randomized Control Trial.

    Science.gov (United States)

    Saun, Tomas J; Odorizzi, Scott; Yeung, Celine; Johnson, Marjorie; Bandiera, Glen; Dev, Shelly P

    Online medical education resources are becoming an increasingly used modality and many studies have demonstrated their efficacy in procedural instruction. This study sought to determine whether a standardized online procedural video is as effective as a standard recorded didactic teaching session for chest tube insertion. A randomized control trial was conducted. Participants were taught how to insert a chest tube with either a recorded didactic teaching session, or a New England Journal of Medicine (NEJM) video. Participants filled out a questionnaire before and after performing the procedure on a cadaver, which was filmed and assessed by 2 blinded evaluators using a standardized tool. Western University, London, Ontario. Level of clinical care: institutional. A total of 30 fourth-year medical students from 2 graduating classes at the Schulich School of Medicine & Dentistry were screened for eligibility. Two students did not complete the study and were excluded. There were 13 students in the NEJM group, and 15 students in the didactic group. The NEJM group's average score was 45.2% (±9.56) on the prequestionnaire, 67.7% (±12.9) for the procedure, and 60.1% (±7.65) on the postquestionnaire. The didactic group's average score was 42.8% (±10.9) on the prequestionnaire, 73.7% (±9.90) for the procedure, and 46.5% (±7.46) on the postquestionnaire. There was no difference between the groups on the prequestionnaire (Δ + 2.4%; 95% CI: -5.16 to 9.99), or the procedure (Δ -6.0%; 95% CI: -14.6 to 2.65). The NEJM group had better scores on the postquestionnaire (Δ + 11.15%; 95% CI: 3.74-18.6). The NEJM video was as effective as video-recorded didactic training for teaching the knowledge and technical skills essential for chest tube insertion. Participants expressed high satisfaction with this modality. It may prove to be a helpful adjunct to standard instruction on the topic. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc

  6. Multicamera High Dynamic Range High-Speed Video of Rocket Engine Tests and Launches

    Data.gov (United States)

    National Aeronautics and Space Administration — High-speed video recording of rocket engine tests has several challenges. The scenes that are imaged have both bright and dark regions associated with plume emission...

  7. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  8. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  9. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
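    As a rough illustration of the two-stream idea described above (one CNN per camera modality, with the extracted features fused for classification), the following Python sketch uses an untrained torchvision ResNet-18 as a stand-in backbone; the fusion by concatenation, input sizes and class count are assumptions rather than the authors' configuration.

```python
# Minimal sketch: two CNN streams (visible-light and thermal), feature fusion,
# and a binary gender classifier. Backbone and fusion are illustrative choices.
import torch
import torch.nn as nn
import torchvision.models as models

class TwoStreamGenderNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.visible_cnn = models.resnet18()   # stand-in backbone, untrained
        self.thermal_cnn = models.resnet18()
        feat_dim = self.visible_cnn.fc.in_features
        # Drop the ImageNet classifier heads; keep only the feature extractors.
        self.visible_cnn.fc = nn.Identity()
        self.thermal_cnn.fc = nn.Identity()
        # Fuse by concatenation, then classify male/female.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, visible: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        f_vis = self.visible_cnn(visible)       # (N, 512)
        f_thr = self.thermal_cnn(thermal)       # (N, 512)
        fused = torch.cat([f_vis, f_thr], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    net = TwoStreamGenderNet()
    vis = torch.randn(4, 3, 224, 224)   # batch of visible-light body crops
    thr = torch.randn(4, 3, 224, 224)   # thermal crops replicated to 3 channels
    print(net(vis, thr).shape)          # torch.Size([4, 2])
```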

  10. Imaging of fast-neutron sources using solid-state track-recorder pinhole radiography

    International Nuclear Information System (INIS)

    Ruddy, F.H.; Gold, R.; Roberts, J.H.; Kaiser, B.J.; Preston, C.C.

    1983-08-01

    Pinhole imaging methods are being developed and tested for potential future use in imaging the intense neutron source of the Fusion Materials Irradiation Test (FMIT) Facility. Previously reported extensive calibration measurements of the proton, neutron, and alpha particle response characteristics of CR-39 polymer solid state track recorders (SSTRs) are being used to interpret the results of imaging experiments using both charged-particle and neutron pinhole collimators. High-resolution neutron pinhole images of a 252Cf source have been obtained in the form of neutron-induced proton-recoil tracks in CR-39 polymer SSTRs. These imaging experiments are described, as well as their potential future applications to FMIT.

  11. Genetic relationships between carcass cut weights predicted from video image analysis and other performance traits in cattle.

    Science.gov (United States)

    Pabiou, T; Fikse, W F; Amer, P R; Cromie, A R; Näsholm, A; Berry, D P

    2012-09-01

    The objective of this study was to quantify the genetic associations between a range of carcass-related traits, including wholesale cut weights predicted from video image analysis (VIA) technology, and a range of pre-slaughter performance traits in commercial Irish cattle. Predicted carcass cut weights comprised cut weights based on retail value: lower value cuts (LVC), medium value cuts (MVC), high value cuts (HVC) and very high value cuts (VHVC), as well as total meat, fat and bone weights. Four main sources of data were used in the genetic analyses: price data of live animals collected from livestock auctions, live-weight data and linear type collected from both commercial and pedigree farms as well as from livestock auctions, and weanling quality recorded on-farm. Heritability of carcass cut weights ranged from 0.21 to 0.39. Genetic correlations between the cut traits and the other performance traits were estimated using a series of bivariate sire linear mixed models where carcass cut weights were phenotypically adjusted to a constant carcass weight. The strongest positive genetic correlations were obtained between predicted carcass cut weights and carcass value (min r_g(MVC) = 0.35; max r_g(VHVC) = 0.69), and animal price at both weaning (min r_g(MVC) = 0.37; max r_g(VHVC) = 0.66) and post weaning (min r_g(MVC) = 0.50; max r_g(VHVC) = 0.67). Moderate genetic correlations were obtained between carcass cut weights and calf price (min r_g(HVC) = 0.34; max r_g(LVC) = 0.45), weanling quality (min r_g(MVC) = 0.12; max r_g(VHVC) = 0.49), and linear scores for muscularity at both weaning (hindquarter development: min r_g(MVC) = -0.06; max r_g(VHVC) = 0.46) and post weaning (hindquarter development: min r_g(MVC) = 0.23; max r_g(VHVC) = 0.44). The genetic correlations for total meat weight were consistent with those observed for the predicted wholesale cut weights. Total fat and total bone weights were generally negatively correlated with carcass value, auction

  12. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested to develop inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs), in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation has high penetration and propagates well through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse other materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cent) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based focal plane arrays (FPAs). These three cameras differ in the number of detectors, the scanning operation, and the detection method. The first and second generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively; both use direct detection and are limited to static imaging. The most recently designed sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/s and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency-modulated continuous-wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a user-friendly interface. This FPA sensor is built from 256 commercial GDD lamps (3 mm diameter, International Light, Inc., Peabody, MA, model 527 Ne indicator lamps) as pixel detectors. All three sensors are fully supported

  13. Dynamic Torsional and Cyclic Fracture Behavior of ProFile Rotary Instruments at Continuous or Reciprocating Rotation as Visualized with High-speed Digital Video Imaging.

    Science.gov (United States)

    Tokita, Daisuke; Ebihara, Arata; Miyara, Kana; Okiji, Takashi

    2017-08-01

    This study examined the dynamic fracture behavior of nickel-titanium rotary instruments in torsional or cyclic loading at continuous or reciprocating rotation by means of high-speed digital video imaging. The ProFile instruments (size 30, 0.06 taper; Dentsply Maillefer, Ballaigues, Switzerland) were categorized into 4 groups (n = 7 in each group) as follows: torsional/continuous (TC), torsional/reciprocating (TR), cyclic/continuous (CC), and cyclic/reciprocating (CR). Torsional loading was performed by rotating the instruments while holding the tip with a vise. For cyclic loading, a custom-made device with a 38° curvature was used. Dynamic fracture behavior was observed with a high-speed camera. The time to fracture was recorded, and the fractured surface was examined with scanning electron microscopy. The TC group initially exhibited necking of the file followed by the development of an initial crack line. The TR group demonstrated opening and closing of a crack according to its rotation in the cutting and noncutting directions, respectively. The CC group separated without any detectable signs of deformation. In the CR group, initial crack formation was recognized in 5 of 7 samples. Reciprocating rotation exhibited a longer time to fracture in both torsional and cyclic fatigue testing (P < .05). The dynamic fracture behavior of the rotary instruments, as visualized with high-speed digital video imaging, varied between the different modes of rotation and the different fatigue tests. Reciprocating rotation induced a slower crack propagation and conferred higher fatigue resistance than continuous rotation in both torsional and cyclic loads. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  14. Towards an automated analysis of video-microscopy images of fungal morphogenesis

    Directory of Open Access Journals (Sweden)

    Diéguez-Uribeondo, Javier

    2005-06-01

    Full Text Available Fungal morphogenesis is an exciting field of cell biology, and several mathematical models have been developed to describe it. These models require experimental evidence to be corroborated and, therefore, there is a continuous search for new microscopy and image-analysis techniques. In this work, we have used a Canny-edge-detector-based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rate and hyphoid fitness. The results show that the data obtained with this technique are similar to published data generated with manual tracing techniques carried out on the same species or genus. Thus, we show that applying an edge-detector-based technique to hyphal growth is an efficient and accurate method for studying hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis. Fungal morphogenesis is a field of great relevance in cell biology, and several mathematical models have been developed for it. Mathematical models of biological processes require experimental evidence to support and corroborate their theoretical predictions, and for this reason there is a continuous search for new microscopy and image-analysis techniques to apply to the study of cell growth. In this work we have used a technique based on an edge detector called the "Canny edge detector" with the aim of automating the generation of hyphal profiles and the calculation of morphogenetic parameters such as the diameter, the elongation rate and the fit to the hyphoid profile, that is, the theoretical profile of fungal hyphae. The results obtained are similar to the data published using manual contour-tracing techniques on the same species and genus. In this way
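    The Canny-edge-detector approach described above can be illustrated with a short OpenCV sketch; the input file name "hypha.png", the Canny thresholds and the diameter estimate via a minimum-area rectangle are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: Canny-edge-based extraction of a hyphal profile and a crude
# diameter estimate from a single bright-field frame.
import cv2
import numpy as np

frame = cv2.imread("hypha.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
blurred = cv2.GaussianBlur(frame, (5, 5), 0)             # suppress noise first
edges = cv2.Canny(blurred, 50, 150)                      # edge map of the hypha outline

# Close small gaps so the outline forms one contour.
kernel = np.ones((3, 3), np.uint8)
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
profile = max(contours, key=cv2.contourArea)              # largest contour = hyphal profile

# Rough diameter: short side of the minimum-area bounding rectangle.
(_, _), (w, h), _ = cv2.minAreaRect(profile)
print(f"approximate hyphal diameter: {min(w, h):.1f} px")
```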

  15. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    Science.gov (United States)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, the success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was altered to 100, 150 and 200 subjects. The proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rates to decrease with increasing numbers of avatars. The same was true for the Secondary Task Reaction Time, while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend toward a u-shaped relation between the number of markings and the Secondary Task Reaction Time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.

  16. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  17. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or "Just Entertainment"?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

    The aim of this study is to assess late adolescents' evaluations of and reasoning about gender stereotypes in video games. Female (n = 46) and male (n = 41) students, predominantly European American, with a mean age of 19 years, are interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences…

  18. Nonlinear analysis and synthesis of video images using deep dynamic bottleneck neural networks for face recognition.

    Science.gov (United States)

    Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali

    2018-05-31

    Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boosts the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, which enjoys the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, this model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of the extracted components showed that the proposed model has an average of 97.77% accuracy in recognition of six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness), and 78.17% accuracy in the recognition of intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, which are on average 0.8 similar to the reference videos on the SSIM scale. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Video x-ray progressive scanning: new technique for decreasing x-ray exposure without decreasing image quality during cardiac catheterization

    International Nuclear Information System (INIS)

    Holmes, D.R. Jr.; Bove, A.A.; Wondrow, M.A.; Gray, J.E.

    1986-01-01

    A newly developed video x-ray progressive scanning system improves image quality, decreases radiation exposure, and can be added to any pulsed fluoroscopic x-ray system using a video display without major system modifications. With use of progressive video scanning, the radiation entrance exposure rate measured with a vascular phantom was decreased by 32 to 53% in comparison with a conventional fluoroscopic x-ray system. In addition to this substantial decrease in radiation exposure, the quality of the image was improved because of less motion blur and artifact. Progressive video scanning has the potential for widespread application to all pulsed fluoroscopic x-ray systems. Use of this technique should make cardiac catheterization procedures and all other fluoroscopic procedures safer for the patient and the involved medical and paramedical staff

  20. Virtually transparent epidermal imagery (VTEI): on new approaches to in vivo wireless high-definition video and image processing.

    Science.gov (United States)

    Anderson, Adam L; Lin, Bingxiong; Sun, Yu

    2013-12-01

    This work first overviews a novel design, and prototype implementation, of a virtually transparent epidermal imagery (VTEI) system for laparo-endoscopic single-site (LESS) surgery. The system uses a network of multiple, micro-cameras and multiview mosaicking to obtain a panoramic view of the surgery area. The prototype VTEI system also projects the generated panoramic view on the abdomen area to create a transparent display effect that mimics equivalent, but higher risk, open-cavity surgeries. The specific research focus of this paper is on two important aspects of a VTEI system: 1) in vivo wireless high-definition (HD) video transmission and 2) multi-image processing-both of which play key roles in next-generation systems. For transmission and reception, this paper proposes a theoretical wireless communication scheme for high-definition video in situations that require extremely small-footprint image sensors and in zero-latency applications. In such situations the typical optimized metrics in communication schemes, such as power and data rate, are far less important than latency and hardware footprint that absolutely preclude their use if not satisfied. This work proposes the use of a novel Frequency-Modulated Voltage-Division Multiplexing (FM-VDM) scheme where sensor data is kept analog and transmitted via "voltage-multiplexed" signals that are also frequency-modulated. Once images are received, a novel Homographic Image Mosaicking and Morphing (HIMM) algorithm is proposed to stitch images from respective cameras, that also compensates for irregular surfaces in real-time, into a single cohesive view of the surgical area. In VTEI, this view is then visible to the surgeon directly on the patient to give an "open cavity" feel to laparoscopic procedures.
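    The multiview mosaicking step can be sketched with a generic homography-based stitcher; this is not the HIMM algorithm itself, and ORB features are used here as a freely available stand-in for the paper's feature choice. File names and parameters are assumptions.

```python
# Minimal sketch: estimate a homography between two camera views and paste the
# second view into the first view's frame to form a simple panorama.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # map img_b into img_a's frame
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))      # warp second view onto a wider canvas
    canvas[:h, :w] = img_a                                   # overlay the reference view
    return canvas

# Usage (hypothetical frames from two of the micro-cameras):
# pano = stitch_pair(cv2.imread("cam0.png"), cv2.imread("cam1.png"))
```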

  1. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument captures video images in CCIR format. Two memory planes, each with a capacity of 512 x 512 x 8 bits, enable storage of two video image frames. The stored image can be processed on-line, and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT-compatible computer. (author). 9 refs., 4 figs., 19 photographs

  2. ScreenRecorder: A Utility for Creating Screenshot Video Using Only Original Equipment Manufacturer (OEM) Software on Microsoft Windows Systems

    Science.gov (United States)

    2015-01-01

    class within Microsoft Visual Studio. It has been tested on and is compatible with Microsoft Vista, 7, and 8 and Visual Studio Express 2008 ... the ScreenRecorder utility assumes a basic understanding of compiling and running C++ code within Microsoft Visual Studio. This report does not ... of Microsoft Visual Studio, the ScreenRecorder utility was developed as a C++ class that can be compiled as a library (static or dynamic) to be

  3. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or "Just Entertainment"?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-06-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) participants, predominantly European-American, mean age = 19 years, were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over 3 different types of games: games with negative male stereotypes, games with negative female stereotypes, and gender-neutral games. Gender differences were found for how participants evaluated these games. Males were more likely than females to find stereotypes acceptable. Results are discussed in terms of social reasoning, video game playing, and gender differences.

  4. Demonstrations of video processing of image data for uranium resource assessments

    International Nuclear Information System (INIS)

    Marrs, R.W.; King, J.K.

    1978-01-01

    Video processing of LANDSAT imagery was performed for nine areas in the western United States to demonstrate the applicability of such analyses for regional uranium resource assessment. The results of these tests, in areas of diverse geology, topography, and vegetation, were mixed. The best success was achieved in arid areas because vegetation cover is extremely limiting in any analysis dealing primarily with rocks and soils. Surface alteration patterns of large areal extent, involving transformation or redistribution of iron oxides, and reflectance contrasts were the only type of alteration consistently detected by video processing of LANDSAT imagery. Alteration often provided the only direct indication of mineralization. Other exploration guides, such as lithologic changes, can often be detected, even in heavily vegetated regions. Structural interpretation of the imagery proved far more successful than spectral analyses as an indicator of regions of possible uranium enrichment

  5. Concurrent Calculations on Reconfigurable Logic Devices Applied to the Analysis of Video Images

    Directory of Open Access Journals (Sweden)

    Sergio R. Geninatti

    2010-01-01

    Full Text Available This paper presents the design and implementation on FPGA devices of an algorithm for computing similarities between neighboring frames in a video sequence using luminance information. By taking advantage of the well-known flexibility of Reconfigurable Logic Devices, we have designed a hardware implementation of the algorithm used in video segmentation and indexing. The experimental results show the tradeoff between concurrent and sequential resources and the functional blocks needed to achieve maximum operational speed with minimum silicon area usage. To evaluate system efficiency, we compare the performance of the hardware solution to that of calculations done in software using general-purpose processors with and without a SIMD instruction set.
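    A plain-Python reference of the frame-similarity measure the hardware computes (mean absolute luminance difference between neighboring frames) may help clarify the algorithm; the video path and the similarity mapping are assumptions, and the FPGA design itself is of course not reproduced here.

```python
# Minimal software reference: per-pair luminance similarity for shot detection.
import cv2
import numpy as np

def luminance_similarities(path):
    cap = cv2.VideoCapture(path)
    sims, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        luma = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            mad = np.mean(np.abs(luma - prev))   # mean absolute luminance difference
            sims.append(1.0 / (1.0 + mad))       # map to a similarity score in (0, 1]
        prev = luma
    cap.release()
    return sims

# Low similarity values flag candidate shot boundaries for segmentation/indexing.
# sims = luminance_similarities("sequence.avi")   # hypothetical input
```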

  6. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or “Just Entertainment”?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2015-01-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) participants, predominantly European-American, mean age = 19 years, were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over 3 different types of games: games with negative male stereotypes, games with negative female stereotypes, and gender-neutral games. Gender differences were found for how participants evaluated these games. Males were more likely than females to find stereotypes acceptable. Results are discussed in terms of social reasoning, video game playing, and gender differences. PMID:25722501

  7. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or “Just Entertainment”?

    OpenAIRE

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41), predominantly European-American, mean age = 19 years, were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over 3 different types of games: games with negative male stereotypes, and games with negative female stereotyp...

  8. Video compression and DICOM proxies for remote viewing of DICOM images

    Science.gov (United States)

    Khorasani, Elahe; Sheinin, Vadim; Paulovicks, Brent; Jagmohan, Ashish

    2009-02-01

    Digital medical images are rapidly growing in size and volume. A typical study includes multiple image "slices." These images have a special format and a communication protocol referred to as DICOM (Digital Imaging Communications in Medicine). Storing, retrieving, and viewing these images are handled by DICOM-enabled systems. DICOM images are stored in central repository servers called PACS (Picture Archival and Communication Systems). Remote viewing stations are DICOM-enabled applications that can query the PACS servers and retrieve the DICOM images for viewing. Modern medical images are quite large, reaching as much as 1 GB per file. When the viewing station is connected to the PACS server via a high-bandwidth local LAN, downloading of the images is relatively efficient and does not cause significant wasted time for physicians. Problems arise when the viewing station is located in a remote facility that has a low-bandwidth link to the PACS server. If the link between the PACS and remote facility is in the range of 1 Mbit/sec, downloading medical images is very slow. To overcome this problem, medical images are compressed to reduce the size for transmission. This paper describes a method of compression that maintains diagnostic quality of images while significantly reducing the volume to be transmitted, without any change to the existing PACS servers and viewer software, and without requiring any change in the way doctors retrieve and view images today.
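    A minimal sketch of the compression step on the proxy side might look as follows; pydicom and OpenCV availability, the windowing and the JPEG quality setting are assumptions, and a clinical deployment would use a diagnostically validated codec rather than this illustrative one.

```python
# Minimal sketch: read a DICOM slice, window it to 8 bits, and re-encode it as a
# much smaller JPEG payload for transmission over a low-bandwidth link.
import numpy as np
import pydicom
import cv2

ds = pydicom.dcmread("study_slice.dcm")          # hypothetical input slice
pixels = ds.pixel_array.astype(np.float32)

# Rescale the full dynamic range to 8 bits for the demo encoder.
lo, hi = pixels.min(), pixels.max()
img8 = np.uint8(255 * (pixels - lo) / max(hi - lo, 1))

ok, payload = cv2.imencode(".jpg", img8, [int(cv2.IMWRITE_JPEG_QUALITY), 75])
print(f"original: {pixels.nbytes} bytes, compressed: {payload.nbytes} bytes")
```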

  9. Head-motion-controlled video goggles: preliminary concept for an interactive laparoscopic image display (i-LID).

    Science.gov (United States)

    Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I

    2009-08-01

    Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing, of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance for all on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static, despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD

  10. Acoustical holographic recording with coherent optical read-out and image processing

    Science.gov (United States)

    Liu, H. K.

    1980-10-01

    New acoustic holographic wave memory devices have been designed for real-time in-situ recording applications. The basic operating principles of these devices and experimental results obtained with some of their prototypes are presented. Recording media used in the devices include thermoplastic resin, Crisco vegetable oil, and Wilson corn oil. In addition, nonlinear coherent optical image processing techniques, including equidensitometry, A-D conversion, and pseudo-color, all based on the new contact screen technique, are discussed with regard to the enhancement of the normally poorly resolved acoustical holographic images.

  11. Gastroesophageal Reflux and Body Movement in Infants: Investigations with Combined Impedance-pH and Synchronized Video Recording

    Directory of Open Access Journals (Sweden)

    Tobias G. Wenzl

    2011-01-01

    Full Text Available The aim of this paper was to investigate the temporal association of gastroesophageal reflux (GER) and body movement in infants. GER was registered by combined impedance-pH; body movement was documented by video. Video recording time (Vt) was divided into "resting time" and "movement time" and analyzed for occurrence of GER. Association was defined as movement 1 minute before/after the beginning of a GER. Statistical evaluation was by Fisher's exact test. In 15 infants, 341 GER were documented during Vt (86 hours). 336 GER (99%) were associated with movement; only 5 episodes (1%) occurred during resting time. Movement was significantly associated with the occurrence of GER (P < .0001). There is a strong temporal association between GER and body movement in infants. However, a clear distinction between cause and effect could not be made with the chosen study design. Combined impedance-pH has proven to be the ideal technique for this approach.
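    For readers unfamiliar with the statistics, the association test can be reproduced with SciPy as sketched below; the 2 x 2 counts used here are illustrative placeholders, not the study's data.

```python
# Toy illustration of Fisher's exact test on a 2 x 2 contingency table relating
# body-movement state to the presence of a GER episode. All counts are invented.
from scipy.stats import fisher_exact

#                 GER present   GER absent
# movement time        30            10
# resting time          5            25
table = [[30, 10],
         [5, 25]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```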

  12. Inter- and intra-specific diurnal habitat selection of zooplankton during the spring bloom observed by Video Plankton Recorder

    DEFF Research Database (Denmark)

    Sainmont, Julie; Gislason, Astthor; Heuschele, Jan

    2014-01-01

    Recorder (VPR), a tool that allows mapping of vertical zooplankton distributions with a far greater spatial resolution than conventional zooplankton nets. The study took place over a full day–night cycle in Disko Bay, Greenland, during the peak of the phytoplankton spring bloom. The sampling revealed...

  13. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    Science.gov (United States)

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes, where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined these methods allow automatic coding and classification of behaviors, which results in a more efficient manner of analyzing movements than manual coding.
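    A compact sketch of the cross-recurrence computation applied to two movement series is given below; the synthetic signals, the one-dimensional embedding and the radius are assumptions, and dedicated CRQA toolboxes report many more measures than the recurrence rate shown.

```python
# Minimal cross-recurrence sketch for two movement time series (e.g., infant head
# speed and mother hand speed extracted from video frames).
import numpy as np

def cross_recurrence_rate(x, y, radius):
    x = (x - x.mean()) / x.std()               # z-score both series
    y = (y - y.mean()) / y.std()
    dist = np.abs(x[:, None] - y[None, :])     # pairwise distances (1-D embedding)
    crp = dist <= radius                        # cross-recurrence plot (boolean matrix)
    return crp.mean()                           # fraction of recurrent points

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 600)
infant = np.sin(t) + 0.3 * rng.standard_normal(600)         # synthetic series
mother = np.sin(t + 0.5) + 0.3 * rng.standard_normal(600)   # lagged, noisy partner
print(f"recurrence rate: {cross_recurrence_rate(infant, mother, radius=0.5):.3f}")
```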

  14. Obstacles delaying the prompt deployment of piston-type mechanical cardiopulmonary resuscitation devices during emergency department resuscitation: a video-recording and time-motion study.

    Science.gov (United States)

    Huang, Edward Pei-Chuan; Wang, Hui-Chih; Ko, Patrick Chow-In; Chang, Anna Marie; Fu, Chia-Ming; Chen, Jiun-Wei; Liao, Yen-Chen; Liu, Hung-Chieh; Fang, Yao-De; Yang, Chih-Wei; Chiang, Wen-Chu; Ma, Matthew Huei-Ming; Chen, Shyr-Chyr

    2013-09-01

    The quality of cardiopulmonary resuscitation (CPR) is important to survival after cardiac arrest. Mechanical devices (MD) provide constant CPR, but their effectiveness may be affected by deployment timeliness. To identify the overall timeliness, and the timeliness of each essential step, in the deployment of a piston-type MD during emergency department (ED) resuscitation, and to identify factors associated with delayed MD deployment, by means of video recordings. Between December 2005 and December 2008, video clips from resuscitations with CPR sessions using an MD in the ED were reviewed using time-motion analyses. The overall deployment timeliness and the time spent on each essential step of deployment were measured. There were 37 CPR recordings that used an MD. Deployment of the MD took an average of 122.6 ± 57.8 s. The 3 most time-consuming steps were: (1) setting the device (57.8 ± 38.3 s), (2) positioning the patient (33.4 ± 38.0 s), and (3) positioning the device (14.7 ± 9.5 s). Total no-flow time was 89.1 ± 41.2 s (72.7% of total time) and was associated with the 3 most time-consuming steps. There was no difference in the total timeliness, no-flow time, and no-flow ratio between different rescuer numbers, time of day of the resuscitation, or body size of patients. Rescuers spent a significant amount of time on MD deployment, leading to long no-flow times. Lack of familiarity with the device and positioning strategy were associated with poor performance. Additional training in device deployment strategies is required to improve the benefits of mechanical CPR. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. New method for identifying features of an image on a digital video display

    Science.gov (United States)

    Doyle, Michael D.

    1991-04-01

    The MetaMap process extends the concept of direct manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology, as well as other possible applications, is described. The MetaMap process is protected by U.S. patent #4

  16. A METHOD FOR RECORDING AND VIEWING STEREOSCOPIC IMAGES IN COLOUR USING MULTICHROME FILTERS

    DEFF Research Database (Denmark)

    2000-01-01

    The aim of the invention is to create techniques for the encoding, production and viewing of stereograms, supplemented by methods for selecting certain optical filters needed in these novel techniques, thus providing a human observer with stereograms each of which consists of a single image ... in a conventional stereogram recorded of the scene. The invention makes use of a colour-based encoding technique and viewing filters selected so that the human observer receives, in one eye, an image of nearly full colour information and, in the other eye, an essentially monochrome image supplying the parallactic ...

  17. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    Science.gov (United States)

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  18. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Rojas Raul

    2007-01-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  19. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Raul Rojas

    2008-03-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.
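    The extract-and-overlay idea can be approximated with plain background subtraction in OpenCV, as sketched below; the file names are hypothetical and the published system uses its own real-time segmentation, so this is only a conceptual stand-in.

```python
# Minimal sketch: segment the lecturer as foreground and paste the foreground
# pixels onto the slide image to form a single, human-centered view.
import cv2

cap = cv2.VideoCapture("lecturer.avi")             # hypothetical instructor video
slide = cv2.imread("slide.png")                     # hypothetical slide image
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (slide.shape[1], slide.shape[0]))
    mask = subtractor.apply(frame)                  # foreground (lecturer) mask
    mask = cv2.medianBlur(mask, 5)                  # clean up speckle noise
    composite = slide.copy()
    composite[mask > 0] = frame[mask > 0]           # paste lecturer pixels onto the slide
    cv2.imshow("human-centered lecture view", composite)
    if cv2.waitKey(30) == 27:                       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```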

  20. Image-based electronic patient records for secured collaborative medical applications.

    Science.gov (United States)

    Zhang, Jianguo; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Yao, Yihong; Cai, Weihua; Jin, Jin; Zhang, Guozhen; Sun, Kun

    2005-01-01

    We developed a Web-based system to interactively display image-based electronic patient records (EPR) for secured intranet and Internet collaborative medical applications. The system consists of four major components: EPR DICOM gateway (EPR-GW), Image-based EPR repository server (EPR-Server), Web Server and EPR DICOM viewer (EPR-Viewer). In the EPR-GW and EPR-Viewer, the security modules of Digital Signature and Authentication are integrated to perform the security processing on the EPR data with integrity and authenticity. The privacy of EPR in data communication and exchanging is provided by SSL/TLS-based secure communication. This presentation gave a new approach to create and manage image-based EPR from actual patient records, and also presented a way to use Web technology and DICOM standard to build an open architecture for collaborative medical applications.
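    As an illustration of the integrity and authenticity processing mentioned above, the following sketch signs and verifies an EPR payload with an RSA-PSS signature using the Python cryptography package; the payload, key handling and algorithm choice are assumptions, not the system's actual security modules.

```python
# Minimal sketch: sign an EPR payload at the gateway and verify it at the viewer
# so tampering in transit is detected and the origin is authenticated.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

epr_blob = b"<patient report plus hashes of referenced DICOM objects>"  # placeholder

signature = private_key.sign(
    epr_blob,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The receiving viewer verifies integrity and origin before displaying the record.
public_key.verify(
    signature,
    epr_blob,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```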

  1. Using neutrosophic graph cut segmentation algorithm for qualified rendering image selection in thyroid elastography video.

    Science.gov (United States)

    Guo, Yanhui; Jiang, Shuang-Quan; Sun, Baiqing; Siuly, Siuly; Şengür, Abdulkadir; Tian, Jia-Wei

    2017-12-01

    Recently, elastography has become very popular in clinical investigation for thyroid cancer detection and diagnosis. In an elastogram, the stress results of the thyroid are displayed using pseudo colors. Due to variation of the rendering results in different frames, it is difficult for radiologists to manually select the qualified frame image quickly and efficiently. The purpose of this study is to find the qualified rendering result in the thyroid elastogram. This paper employs an efficient thyroid ultrasound image segmentation algorithm based on a neutrosophic graph cut to find the qualified rendering images. Firstly, a thyroid ultrasound image is mapped into a neutrosophic set, and an indeterminacy filter is constructed to reduce the indeterminacy of the spatial and intensity information in the image. A graph is defined on the image and the weight for each pixel is represented using the value after indeterminacy filtering. The segmentation results are obtained using a maximum-flow algorithm on the graph. Then the anatomic structure is identified in the thyroid ultrasound image. Finally the rendering colors on these anatomic regions are extracted and validated to find the frames which satisfy the selection criteria. To test the performance of the proposed method, a thyroid elastogram dataset was built, comprising 33 cases in total. An experienced radiologist manually evaluated the selection results of the proposed method. Experimental results demonstrate that the proposed method finds the qualified rendering frame with 100% accuracy. The proposed scheme assists the radiologists to diagnose thyroid diseases using the qualified rendering images.

  2. Unattended video surveillance systems for international safeguards

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1979-01-01

    The use of unattended video surveillance systems places some unique requirements on the systems and their hardware. The systems have the traditional requirements of video imaging, video storage, and video playback but also have some special requirements such as tamper safing. The technology available to meet these requirements and how it is being applied to unattended video surveillance systems are discussed in this paper

  3. Video-rate confocal microscopy for single-molecule imaging in live cells and superresolution fluorescence imaging.

    Science.gov (United States)

    Lee, Jinwoo; Miyanaga, Yukihiro; Ueda, Masahiro; Hohng, Sungchul

    2012-10-17

    There is no confocal microscope optimized for single-molecule imaging in live cells and superresolution fluorescence imaging. By combining the swiftness of the line-scanning method and the high sensitivity of wide-field detection, we have developed a, to our knowledge, novel confocal fluorescence microscope with good optical-sectioning capability (1.0 μm), fast frame rates, and high fluorescence detection efficiency. Full compatibility of the microscope with conventional cell-imaging techniques allowed us to perform single-molecule imaging with great ease at arbitrary depths in live cells. With the new microscope, we monitored the diffusion motion of fluorescently labeled cAMP receptors of Dictyostelium discoideum at both the basal and apical surfaces and obtained superresolution fluorescence images of microtubules of COS-7 cells at depths in the range 0-85 μm from the surface of a coverglass. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  4. Condylar guidance: correlation between protrusive interocclusal record and panoramic radiographic image: a pilot study.

    Science.gov (United States)

    Tannamala, Pavan Kumar; Pulagam, Mahesh; Pottem, Srinivas R; Swapna, B

    2012-04-01

    The purpose of this study was to compare the sagittal condylar angles set in the Hanau articulator by use of a method of obtaining an intraoral protrusive record to those angles found using a panoramic radiographic image. Ten patients, free of signs and symptoms of temporomandibular disorder and with intact dentition, were selected. The dental stone casts of the subjects were mounted on a Hanau articulator with a springbow and poly(vinyl siloxane) interocclusal records. For all patients, the protrusive records were obtained when the mandible moved forward by approximately 6 mm. All procedures for recording, mounting, and setting were done in the same session. The condylar guidance angles obtained were tabulated. A panoramic radiographic image of each patient was made with the Frankfurt horizontal plane parallel to the floor of the mouth. Tracings of the radiographic images were made. The horizontal reference line was marked by joining the orbitale and porion. The most superior and most inferior points of the curvatures were identified. These two points were connected by a straight line representing the mean curvature line. Angles made by the intersection of the mean curvature line and the horizontal reference line were measured. The results were subjected to statistical analysis with a significance level of p < 0.05. The difference in mean condylar guidance angle between the right and left sides by both methods was not statistically significant. The comparisons of mean condylar guidance angles between the right side of the protrusive record method and the right side of the panoramic radiographic method, and between the left side of the protrusive record method and the left side of the panoramic radiographic method (p = 0.071 and p = 0.057, respectively), were not statistically significant. Within the limitations of this study, it was concluded that the protrusive condylar guidance angles obtained by panoramic radiograph may be used in programming semi-adjustable articulators. © 2012

  5. An alternative effective method for verifying the multileaf collimator leaves speed by using a digital-video imaging system

    International Nuclear Information System (INIS)

    Hwang, Ing-Ming; Wu, Jay; Chuang, Keh-Shih; Ding, Hueisch-Jy

    2010-01-01

    We present an alternative effective method for verifying multileaf collimator (MLC) leaf speed using a digital-video imaging system in daily dynamic conformal radiation therapy (DCRT) and intensity-modulated radiation therapy (IMRT), achieving increased convenience and shorter treatment times. The measured horizontal leaf speed was within 1.76-2.08 cm/s. The mean full-range travel time was 20 s. The initial speed-up time was within 1.5-2.0 s, and the slowing-down time was within 2.0-2.5 s. Due to gravity, the maximum speed-up effect in the X1 bank was +0.10 cm/s, but the lagging effect in the X2 bank was -0.20 cm/s. This technique offered an alternative to electronic portal imaging device (EPID), charge-coupled device (CCD), or light-field methods for measuring MLC leaf speed. When time taken on the linac was kept to a minimum, the images could be processed off-line.
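    The leaf-speed measurement can be sketched as tracking a leaf edge along its travel axis frame by frame and converting the displacement to cm/s; the frame rate, intensity threshold and pixel calibration below are assumptions, not the values used in the paper.

```python
# Minimal sketch: estimate MLC leaf speed from a video by locating the leading
# edge of one leaf in each frame and differentiating its position over time.
import cv2
import numpy as np

FPS = 30.0            # camera frame rate (assumed)
CM_PER_PIXEL = 0.05   # spatial calibration of the imaged field (assumed)

def leaf_edge_position(gray_frame, row):
    profile = gray_frame[row, :]                 # intensity along the leaf travel axis
    dark = np.where(profile < 60)[0]             # pixels covered by the leaf
    return dark.max() if dark.size else None     # leading-edge column

cap = cv2.VideoCapture("mlc_leaf.avi")           # hypothetical recording
positions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edge = leaf_edge_position(gray, row=240)
    if edge is not None:
        positions.append(edge)
cap.release()

speeds = np.abs(np.diff(positions)) * CM_PER_PIXEL * FPS   # cm/s between frames
print(f"mean leaf speed: {speeds.mean():.2f} cm/s, peak: {speeds.max():.2f} cm/s")
```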

  6. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition even if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic image understanding problem. The brain reduces informational and computational complexity by using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will recognize targets reliably.

  7. Photoplethysmography Signal Analysis for Optimal Region-of-Interest Determination in Video Imaging on a Built-In Smartphone under Different Conditions

    Directory of Open Access Journals (Sweden)

    Yunyoung Nam

    2017-10-01

    Full Text Available Smartphones and tablets are widely used in medical fields, which can improve healthcare and reduce healthcare costs. Many medical applications for smartphones and tablets have already been developed and are widely used by both health professionals and patients. Specifically, video recordings of fingertips made using a smartphone camera contain a pulsatile component caused by the cardiac pulse, equivalent to that present in a photoplethysmographic signal. By performing peak detection on the pulsatile signal, it is possible to estimate a continuous heart rate and a respiratory rate. To estimate the heart rate and respiratory rate accurately, it is necessary to investigate which pixel regions and color bands give the optimal signal quality. In this paper, we investigate signal quality, identified by the largest amplitude values, for three different smartphones under different conditions. We conducted several experiments to obtain reliable PPG signals and compared the PPG signal strength in the three color bands with the flashlight both on and off. We also evaluated the intensity changes of PPG signals obtained from the smartphones with motion artifacts and fingertip pressure force. Furthermore, we compared the PSNR of PPG signals of the full-size images with that of the regions of interest (ROIs).
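    The fingertip-PPG extraction described above can be sketched in a few lines; the clip name, the 30 fps assumption, the central ROI and the use of the green channel are illustrative choices.

```python
# Minimal sketch: average the green channel of a central ROI in each frame, then
# detect peaks in the resulting pulsatile signal to estimate heart rate.
import cv2
import numpy as np
from scipy.signal import find_peaks

FPS = 30.0
cap = cv2.VideoCapture("fingertip.mp4")            # hypothetical smartphone clip
signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[h // 3: 2 * h // 3, w // 3: 2 * w // 3]   # central ROI
    signal.append(roi[:, :, 1].mean())              # mean green-channel intensity
cap.release()

ppg = np.asarray(signal)
ppg = ppg - ppg.mean()
# Enforce a physiologic minimum spacing of ~0.4 s between beats (150 bpm ceiling).
peaks, _ = find_peaks(ppg, distance=int(0.4 * FPS))
duration_min = len(ppg) / FPS / 60.0
print(f"estimated heart rate: {len(peaks) / duration_min:.0f} bpm")
```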

  8. Fast mega pixels video imaging of a toroidal plasma in KT5D device

    International Nuclear Information System (INIS)

    Xu Min; Wang Zhijiang; Lu Ronghua; Sun Xiang; Wen Yizhi; Yu Changxuan; Wan Shude; Liu Wandong; Wang Jun; Xiao Delong; Yu Yi; Zhu Zhenghua; Hu Linyin

    2005-01-01

    A direct imaging system, viewing visible light emission from plasmas tangentially or perpendicularly, has been set up on the KT5D toroidal device to monitor the real two-dimensional profiles of purely ECR-generated plasmas. This system has a typical spatial resolution of 0.2 mm (1280x1024 pixels) when imaging the whole cross section. Interesting features of ECR plasmas have been found. Contrary to what classical theories predict, a resonance layer with two or three bright spots, rather than an even vertical band, has been observed. In addition, the images also indicate an intermittent splitting and drifting character of the plasmas.

  9. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    Full Text Available A submersible video camera system, aimed at recording images of aquatic vegetation growth in Antarctic lakes over one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision system) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images than the previous submersible video system, without increasing power consumption. The system was set on the lake floor in Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater images for one year has been started by our diving operation.

  10. Learning Trajectory for Transforming Teachers' Knowledge for Teaching Mathematics and Science with Digital Image and Video Technologies in an Online Learning Experience

    Science.gov (United States)

    Niess, Margaret L.; Gillow-Wiles, Henry

    2014-01-01

    This qualitative cross-case study explores the influence of a designed learning trajectory on transforming teachers' technological pedagogical content knowledge (TPACK) for teaching with digital image and video technologies. The TPACK Learning Trajectory embeds tasks with specific instructional strategies within a social metacognitive…

  11. DEVELOPMENT AND EVALUATION OF A QUANTITATIVE VIDEO-FLUORESCENCE IMAGING SYSTEM AND FLUORESCENT TRACER FOR MEASURING TRANSFER OF PESTICIDE RESIDUES FROM SURFACES TO HANDS WITH REPEATED CONTACTS

    Science.gov (United States)

    A video imaging system and the associated quantification methods have been developed for measurement of the transfers of a fluorescent tracer from surfaces to hands. The highly fluorescent compound riboflavin (Vitamin B2), which is also water soluble and non-toxic, was chosen as...

  12. Global adjustment for creating extended panoramic images in video-dermoscopy

    Science.gov (United States)

    Faraz, Khuram; Blondel, Walter; Daul, Christian

    2017-07-01

    This contribution presents a fast global adjustment scheme exploiting SURF descriptor locations for constructing large skin mosaics. Precision in pairwise image registration is well-preserved while significantly reducing the global mosaicing error.

  13. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  14. Preparation of photo and video images during foot diagnostics in stress conditions

    International Nuclear Information System (INIS)

    Katsarov, V; Stoyanov, K.; Panchev, P.; Belcheva, J.; Atanasov, A.

    2008-01-01

    The aim of this work is to present some practical issues concerning image scanning, processing and software application in orthopedics and traumatology for foot diagnostic purposes. Basic concepts in optical scanning, multi-position photography and technology with high informational value have been discussed. The use of slide show, clip and MPEG graphic formats during preparation for capture and image processing has also been demonstrated.

  15. Calculation of entrance exposed area from recorded images in cardiac diagnostic and interventional procedures

    International Nuclear Information System (INIS)

    Bibbo, G.; Balman, D.

    2000-01-01

    With an increasing number of interventional radiological procedures performed on patients of all ages, it is important to determine the skin entrance dose of patients in order to limit the side effects of radiation. In most cases the skin dose is measured using thermoluminescent detectors (TLD). However, these detectors need to be placed in the radiation field on the skin of the patient, which may interfere with the procedure. Also, not all radiological practices are equipped with TLD readers, which are expensive, or have staff with the appropriate knowledge and expertise to make use of TLD. The alternative to TLD is to use the dose-area product (DAP) measured with a Diamentor fitted to the angiography x-ray equipment. The difficulty in using DAP to calculate skin dose is that the irradiated area of the skin is not known. The area can change in size and location during the procedure as the radiologist/medical specialist varies the collimation and region of interest. For angiography equipment the distance between the anode and the image intensifier is variable, as is the height of the examination table. The only point of reference is the isocentre. With recorded images it is possible to determine the irradiated area of the patient at the isocentre plane using the stenosis algorithm. The recorded image is calibrated such that it corresponds to the physical size in the plane of the isocentre. For non-recorded images, it may be necessary to assume that the collimation has not changed and that the irradiated area is the same as that shown on the recorded images. The Women's and Children's Hospital has a Toshiba DFP2000 Biplane Digital Imaging system used for all cardiac and general angiography and interventional procedures. With this system the exposure factors (kVp, mA, field sizes) are recorded with the images. The source-to-image distance (SID), magnification factor (calibration factor of the recorded images) and angle of rotation are displayed on the Display Panel of the
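
    A simplified numerical illustration of this idea (the formula and backscatter factor are assumptions for the sketch, not the hospital's algorithm): once the irradiated area at the isocentre plane has been measured from the recorded image, the DAP can be converted to an entrance skin dose by projecting the field area back to the skin plane and applying a nominal backscatter factor:

        def entrance_skin_dose_gy(dap_gy_cm2, area_at_iso_cm2,
                                  focus_to_iso_cm, focus_to_skin_cm,
                                  backscatter_factor=1.3):
            # Project the field area from the isocentre plane back to the skin plane.
            area_at_skin = area_at_iso_cm2 * (focus_to_skin_cm / focus_to_iso_cm) ** 2
            # DAP is (to first order) invariant with distance, so dose = DAP / area;
            # a nominal backscatter factor converts incident air kerma to skin dose.
            return backscatter_factor * dap_gy_cm2 / area_at_skin

        # Example: 20 Gy.cm2 DAP, a 100 cm2 field at the isocentre (100 cm from the focus),
        # patient skin 85 cm from the focus.
        print(entrance_skin_dose_gy(20.0, 100.0, 100.0, 85.0))   # ~0.36 Gy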

  16. Ladder beam and camera video recording system for evaluating forelimb and hindlimb deficits after sensorimotor cortex injury in rats.

    Science.gov (United States)

    Soblosky, J S; Colgin, L L; Chorney-Lane, D; Davidson, J F; Carey, M E

    1997-12-30

    Hindlimb and forelimb deficits in rats caused by sensorimotor cortex lesions are frequently tested by using the narrow flat beam (hindlimb), the narrow pegged beam (hindlimb and forelimb) or the grid-walking (forelimb) tests. Although these are excellent tests, the narrow flat beam generates non-parametric data, so the use of more powerful parametric statistical analyses is precluded. All these tests can be difficult to score if the rat is moving rapidly. Foot misplacements, especially on the grid-walking test, are indicative of an ongoing deficit, but have not been reliably and accurately described and quantified previously. In this paper we present an easy-to-construct and easy-to-use horizontal ladder beam with a camera system on rails, which can be used to evaluate both hindlimb and forelimb deficits in a single test. By slow-motion videotape playback we were able to quantify and demonstrate foot misplacements which go beyond the recovery period usually seen using more conventional measures (i.e. footslips and footfaults). This convenient system provides a rapid and reliable method for recording and evaluating rat performance on any type of beam and may be useful for measuring sensorimotor recovery following brain injury.

  17. Learning how to rate video-recorded therapy sessions: a practical guide for trainees and advanced clinicians.

    Science.gov (United States)

    McCullough, Leigh; Bhatia, Maneet; Ulvenes, Pal; Berggraf, Lene; Osborn, Kristin

    2011-06-01

    Watching and rating psychotherapy sessions is an important yet often overlooked component of psychotherapy training. This article provides a simple and straightforward guide for using one Website (www.ATOStrainer.com) that provides an automated training protocol for rating psychotherapy sessions. By the end of the article, readers will have the knowledge needed to go to the Website and begin using this training method as soon as they have a recorded session to view. This article presents: (a) an overview of the Achievement of Therapeutic Objectives Scale (ATOS; McCullough et al., 2003a), a research tool used to rate psychotherapy sessions; (b) a description of APA training tapes, available for purchase from APA Books, that have been rated and scored by ATOS-trained clinicians and posted on the Website; (c) step-by-step procedures on how ratings can be done; (d) an introduction to www.ATOStrainer.com, where ratings can be entered and compared with expert ratings; and (e) first-hand personal experiences of the authors using this training method and the benefits it affords both trainees and experienced therapists. This psychotherapy training Website has the potential to be a key resource tool for graduate students, researchers, and clinicians. Our long-range goal is to promote the growth of our understanding of psychotherapy and to improve the quality of psychotherapy provided for patients.

  18. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  19. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen image creation depending on the musical form and the text of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  20. Imaging of Volume Phase Gratings in a Photosensitive Polymer, Recorded in Transmission and Reflection Geometry

    Directory of Open Access Journals (Sweden)

    Tina Sabel

    2014-02-01

    Full Text Available Volume phase gratings, recorded in a photosensitive polymer by two-beam interference exposure, are studied by means of optical microscopy. Transmission gratings and reflection gratings, with periods on the order of 10 μm down to 130 nm, were investigated. Mapping of holograms by means of imaging in sectional view is introduced to study reflection-type gratings, evading the resolution limit of classical optical microscopy. In addition, this technique is applied to examine so-called parasitic gratings, arising from interference between the incident reference beam and the reflected signal beam. The appearance and possible avoidance of such unintentionally recorded secondary structures is discussed.

  1. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications for Multimedia Engineering, taken by Bachelor's degree students. The principal goal of all course lectures and hands-on laboratory activities was for the students to acquire not only image-specific technical skills but also a general knowledge of data analysis, so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used in image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects, during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  2. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    International Nuclear Information System (INIS)

    Quanqing, Zhu.; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-01-01

    In this paper, we present a method to realize feature extraction on low-contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, the Lee filtering method is adopted for pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally, the common linking method is adopted and the characteristic parameters of the magnetic domains are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over other traditional ones for feature extraction from low-contrast images.
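
    A rough sketch of such a pipeline (the specific filters and threshold are assumptions; a median filter stands in for the Lee filter):

        import numpy as np
        from scipy import ndimage

        def segment_domains(image, threshold_factor=2.0):
            smoothed = ndimage.median_filter(image.astype(float), size=3)  # pre-filtering / noise reduction
            gx = ndimage.sobel(smoothed, axis=1)
            gy = ndimage.sobel(smoothed, axis=0)
            grad = np.hypot(gx, gy)
            edges = grad > threshold_factor * grad.mean()                  # gradient feature segmentation
            mask = ndimage.binary_fill_holes(ndimage.binary_closing(edges))
            labels, n = ndimage.label(mask)                                # linking of connected regions
            areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))   # characteristic parameters
            return labels, areas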

  3. Fuzzy-Based Segmentation for Variable Font-Sized Text Extraction from Images/Videos

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-01-01

    Full Text Available Textual information embedded in multimedia can provide a vital tool for indexing and retrieval. A lot of work has been done in the field of text localization and detection because of its fundamental importance. One of the biggest challenges of text detection is dealing with variation in font sizes and image resolution. This problem is exacerbated by undersegmentation or oversegmentation of the regions in an image. The paper addresses this problem by proposing a solution using a novel fuzzy-based method. This paper advocates a postprocessing segmentation method that can solve the problem of variation in text sizes and image resolution. The methodology is tested on the ICDAR 2011 Robust Reading Challenge dataset, which amply proves the strength of the recommended method.

  4. Enlargement device of an image part contained in a video signal

    International Nuclear Information System (INIS)

    Bossaert, J.; Bodelet, P.; Tomietto, T.

    1994-01-01

    To filter a signal delivered in interlaced form, a filter is introduced in series on each half-frame, having a band-pass transfer function in the horizontal direction and a high-pass transfer function in the vertical direction. Applied to the overall image signal, this filter realizes a general band-pass transfer function. The centre frequency of this band-pass filter is matched to the chosen image resolution. In this way the contours of structures can be enhanced. The method applies particularly to medical radiography. 3 refs., 5 figs
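
    A hedged sketch of this kind of separable filtering (the Gaussian widths and gain are illustrative assumptions, not the patent's implementation): a horizontal band-pass is built as a difference of two 1-D Gaussians along the rows, a vertical high-pass then removes low vertical frequencies, and the result is added back to enhance contours:

        from scipy import ndimage

        def enhance_field(field, narrow=1.0, wide=3.0, gain=0.5):
            f = field.astype(float)
            # Horizontal band-pass: difference of two 1-D Gaussian smoothings along the rows.
            h = (ndimage.gaussian_filter1d(f, narrow, axis=1)
                 - ndimage.gaussian_filter1d(f, wide, axis=1))
            # Vertical high-pass: remove the low-frequency vertical content from that result.
            bp_hp = h - ndimage.gaussian_filter1d(h, wide, axis=0)
            # Add the filtered detail back to the original to enhance structure contours.
            return f + gain * bp_hp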

  5. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.
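
    One common choice of clarity index (an assumption here; the abstract does not name its exact factor) is the variance of the Laplacian, which can be used to rank frames by blurriness and keep only the sharpest ones:

        import cv2

        def sharpness(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var()   # higher variance = sharper frame

        def keep_sharpest(frames, n_keep=10):
            return sorted(frames, key=sharpness, reverse=True)[:n_keep]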

  6. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees.

    Science.gov (United States)

    Giraldo, Paula Jimena Ramos; Aguirre, Álvaro Guerrero; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio

    2017-04-06

    Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.

  7. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search indexing feature. As the use of video cameras has greatly increased in recent years, face recognition is a natural fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system has very high efficiency in processing real-life videos, and it is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from the front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
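
    The paper's model uses fuzzy PCA; as a hedged stand-in, ordinary PCA from scikit-learn illustrates the underlying idea of reconstructing an occluded face from a subspace learned on unoccluded training faces (each face flattened to a 1-D vector):

        import numpy as np
        from sklearn.decomposition import PCA

        def build_face_model(train_faces, n_components=50):
            X = np.asarray([f.ravel() for f in train_faces], dtype=float)
            return PCA(n_components=n_components).fit(X)

        def reconstruct(model, occluded_face):
            x = occluded_face.astype(float).ravel()[None, :]
            coeffs = model.transform(x)                      # project onto the face subspace
            return model.inverse_transform(coeffs).reshape(occluded_face.shape)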

  8. Subjective evaluation of the accuracy of video imaging prediction following orthognathic surgery in Chinese patients

    NARCIS (Netherlands)

    Chew, Ming Tak; Koh, Chay Hui; Sandham, John; Wong, Hwee Bee

    Purpose: The aims of this retrospective study were to assess the subjective accuracy of predictions generated by computer imaging software in Chinese patients who had undergone orthognathic surgery, and to determine the influence of the initial dysgnathia and the complexity of the surgical procedure on

  9. Recording soft-X-ray images with photographic materials at large gamma background

    International Nuclear Information System (INIS)

    Izrailev, I.M.

    1993-01-01

    The sensitivity of photographic materials to soft X-rays and ⁶⁰Co γ-quanta when developed by visible light and a chemical developer is investigated. When the photographic paper is developed by visible light, its sensitivity is reduced by 200-300 times independent of the quantum energy. This method allows an X-ray image to be recorded even when there is γ-background of 10⁵ R. 2 refs., 1 tab

  10. The role of records management professionals in optical disk-based document imaging systems in the petroleum industry

    International Nuclear Information System (INIS)

    Cisco, S.L.

    1992-01-01

    Analyses of the data indicated that nearly one third of the 83 companies in this study had implemented one or more document imaging systems. Companies with imaging systems mostly were large (more than 1,001 employees), and mostly were international in scope. Although records management professionals traditionally were delegated responsibility for acquiring, designing, implementing, and maintaining paper-based information systems and the records therein, when records were converted to optical disks, responsibility for acquiring, designing, implementing, and maintaining optical disk-based information systems and the records therein, was delegated more frequently to end user departments and IS/MIS/DP professionals than to records professionals. Records management professionals assert that the need of an organization for a comprehensive records management program is not served best when individuals who are not professional records managers are responsible for the records stored in optical disk-based information systems

  11. Recent advances in recording electrophysiological data simultaneously with magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Laufs, H. [Univ Frankfurt, Zentrum Neurol and Neurochirurg, Neurol Klin, D-60590 Frankfurt (Germany); Laufs, H. [Univ Frankfurt, Dept Neurol, D-60590 Frankfurt (Germany); Laufs, H. [Univ Frankfurt, Brain Imaging Ctr, D-60590 Frankfurt (Germany); Laufs, H.; Carmichael, D.W. [UCL, Inst Neurol, Dept Clin and Expt Epilepsy, London (United Kingdom); Daunizeau, J. [Wellcome Trust Ctr Neuroimaging, London (United Kingdom); Kleinschmidt, A. [INSERM, Unite 562, F-91191 Gif SurYvette (France); Kleinschmidt, A. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Kleinschmidt, A. [Univ Paris 11, F-91405 Orsay (France)

    2008-07-01

    Simultaneous recording of brain activity by different neuro-physiological modalities can yield insights that reach beyond those obtained by each technique individually, even when compared to those from the post-hoc integration of results from each technique recorded sequentially. Success in the endeavour of real-time multimodal experiments requires special hardware and software as well as purpose-tailored experimental design and analysis strategies. Here, we review the key methodological issues in recording electrophysiological data in humans simultaneously with magnetic resonance imaging (MRI), focusing on recent technical and analytical advances in the field. Examples are derived from simultaneous electro-encephalography (EEG) and electromyography (EMG) during functional MRI in cognitive and systems neuroscience as well as in clinical neurology, in particular in epilepsy and movement disorders. We conclude with an outlook on current and future efforts to achieve true integration of electrical and haemodynamic measures of neuronal activity using data fusion models. (authors)

  12. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  13. Exploring the clinical decision-making used by experienced cardiorespiratory physiotherapists: A mixed method qualitative design of simulation, video recording and think aloud techniques.

    Science.gov (United States)

    Thackray, Debbie; Roberts, Lisa

    2017-02-01

    The ability of physiotherapists to make clinical decisions is a vital component of being an autonomous practitioner, yet this complex phenomenon has been under-researched in cardiorespiratory physiotherapy. The purpose of this study was to explore clinical decision-making (CDM) by experienced physiotherapists in a scenario of a simulated patient experiencing acute deterioration of their respiratory function. The main objective of this observational study was to identify the actions, thoughts, and behaviours used by experienced cardiorespiratory physiotherapists in their clinical decision-making processes. A mixed-methods (qualitative) design employing observation and think-aloud was adopted, using a computerised manikin in a simulated environment. The participants clinically assessed the manikin programmed with the same clinical signs, under standardised conditions in the clinical skills practice suite, which was set up as a ward environment. Experienced cardiorespiratory physiotherapists, recruited from clinical practice within a 50-mile radius of the University(*). Participants were video-recorded throughout the assessment and treatment and asked to verbalise their thought processes using the 'think-aloud' method. The recordings were transcribed verbatim and managed using a Framework approach. Eight cardiorespiratory physiotherapists participated (mean 7 years clinical experience, range 3.5-16 years). CDM was similar to the collaborative hypothetico-deductive model, five-rights nursing model, reasoning strategies, inductive reasoning and pattern recognition. However, the CDM demonstrated by the physiotherapists was complex, interactive and iterative. Information processing occurred continuously throughout the whole interaction with the patient, and the specific cognitive skills of recognition, matching, discriminating, relating, inferring, synthesising and prediction were identified as being used sequentially. The findings from this study were used to develop a new

  14. Image-guided recording system for spatial and temporal mapping of neuronal activities in brain slice.

    Science.gov (United States)

    Choi, Geonho; Lee, Jeonghyeon; Kim, Hyeongeun; Jang, Jaemyung; Im, Changkyun; Jeon, Nooli; Jung, Woonggyu

    2018-03-01

    In this study, we introduce a novel image-guided recording system (IGRS) for efficient interpretation of neuronal activities in brain slices. IGRS is designed to combine a microelectrode array (MEA) and optical coherence tomography in a customized upright microscope. It allows recording of multi-site neuronal signals and imaging of the volumetric brain anatomy in a single-body configuration. For convenient interconnection between a brain image and neuronal signals, we developed an automatic mapping protocol that enables us to project acquired neuronal signals onto a brain image. To evaluate the performance of IGRS, hippocampal signals of the brain slice were monitored, and the corresponding two-dimensional neuronal maps were successfully reconstructed. Our results indicate that IGRS and the mapping protocol can provide intuitive information regarding long-term, multi-site neuronal signals. In particular, the temporal and spatial mapping capability of neuronal signals would be a very promising tool to observe and analyze massive neuronal activity and connectivity in MEA-based electrophysiological studies. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. A standardized imaging protocol for the endoscopic prediction of dysplasia within sessile serrated polyps (with video).

    Science.gov (United States)

    Tate, David J; Jayanna, Mahesh; Awadie, Halim; Desomer, Lobke; Lee, Ralph; Heitman, Steven J; Sidhu, Mayenaaz; Goodrick, Kathleen; Burgess, Nicholas G; Mahajan, Hema; McLeod, Duncan; Bourke, Michael J

    2018-01-01

    Dysplasia within sessile serrated polyps (SSPs) is difficult to detect and may be mistaken for an adenoma, risking incomplete resection of the background serrated tissue, and is strongly implicated in interval cancer after colonoscopy. The use of endoscopic imaging to detect dysplasia within SSPs has not been systematically studied. Consecutively detected SSPs ≥8 mm in size were evaluated by using a standardized imaging protocol at a tertiary-care endoscopy center over 3 years. Lesions suspected as SSPs were analyzed with high-definition white light then narrow-band imaging. A demarcated area with a neoplastic pit pattern (Kudo type III/IV, NICE type II) was sought among the serrated tissue. If this was detected, the lesion was labeled dysplastic (sessile serrated polyp with dysplasia); if not, it was labeled non-dysplastic (sessile serrated polyp without dysplasia). Histopathology was reviewed by 2 blinded specialist GI pathologists. A total of 141 SSPs were assessed in 83 patients. Median lesion size was 15.0 mm (interquartile range 10-20), and 54.6% were in the right side of the colon. Endoscopic evidence of dysplasia was detected in 36 of 141 (25.5%) SSPs; of these, 5 of 36 (13.9%) lacked dysplasia at histopathology. Two of 105 (1.9%) endoscopically designated non-dysplastic SSPs had dysplasia at histopathology. Endoscopic imaging, therefore, had an accuracy of 95.0% (95% confidence interval [CI], 90.1%-97.6%) and a negative predictive value of 98.1% (95% CI, 92.6%-99.7%) for detection of dysplasia within SSPs. Dysplasia within SSPs can be detected accurately by using a simple, broadly applicable endoscopic imaging protocol that allows complete resection. Independent validation of this protocol and its dissemination to the wider endoscopic community may have a significant impact on rates of interval cancer. (Clinical trial registration number: NCT03100552.). Copyright © 2018 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All
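
    The reported accuracy and negative predictive value follow directly from the counts given above; a quick check (illustrative arithmetic only):

        # 36 SSPs called dysplastic endoscopically, 5 of them without dysplasia on histopathology;
        # 105 called non-dysplastic, 2 of them dysplastic on histopathology.
        tp, fp = 36 - 5, 5
        tn, fn = 105 - 2, 2

        accuracy = (tp + tn) / (tp + tn + fp + fn)   # 134/141
        npv = tn / (tn + fn)                         # 103/105
        print(f"accuracy = {accuracy:.1%}, NPV = {npv:.1%}")   # 95.0%, 98.1%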

  16. Signal recovery in imaging photoplethysmography

    International Nuclear Information System (INIS)

    Holton, Benjamin D; Mannapperuma, Kavan; Lesniewski, Peter J; Thomas, John C

    2013-01-01

    Imaging photoplethysmography is an emerging technique for the extraction of biometric information from people using video recordings. The focus is on extracting the cardiac heart rate of the subject by analysing the luminance of the colour video signal and identifying periodic components. Advanced signal processing is needed to recover the information required. In this paper, independent component analysis (ICA), principal component analysis, auto- and cross-correlation are investigated and compared with respect to their effectiveness in extracting the relevant information from video recordings. Results obtained are compared with those recorded by a modern commercial finger pulse oximeter. It is found that ICA produces the most consistent results. (paper)
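
    A hedged sketch of the ICA step (the library choice and band limits are assumptions): apply FastICA to the three mean colour-channel traces and keep the component with the most power in the cardiac band as the pulse signal:

        import numpy as np
        from sklearn.decomposition import FastICA

        def pulse_component(rgb_traces, fps):
            """rgb_traces: array of shape (n_frames, 3) of mean R, G, B values per frame."""
            X = rgb_traces - rgb_traces.mean(axis=0)
            sources = FastICA(n_components=3, random_state=0).fit_transform(X)  # (n_frames, 3)
            freqs = np.fft.rfftfreq(len(X), d=1.0 / fps)
            band = (freqs > 0.7) & (freqs < 4.0)            # ~42-240 beats per minute
            power = np.abs(np.fft.rfft(sources, axis=0)) ** 2
            best = np.argmax(power[band].sum(axis=0))       # component with most cardiac-band power
            return sources[:, best]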

  17. Signal recovery in imaging photoplethysmography.

    Science.gov (United States)

    Holton, Benjamin D; Mannapperuma, Kavan; Lesniewski, Peter J; Thomas, John C

    2013-11-01

    Imaging photoplethysmography is an emerging technique for the extraction of biometric information from people using video recordings. The focus is on extracting the cardiac heart rate of the subject by analysing the luminance of the colour video signal and identifying periodic components. Advanced signal processing is needed to recover the information required. In this paper, independent component analysis (ICA), principal component analysis, auto- and cross-correlation are investigated and compared with respect to their effectiveness in extracting the relevant information from video recordings. Results obtained are compared with those recorded by a modern commercial finger pulse oximeter. It is found that ICA produces the most consistent results.

  18. Visualizing Music: The Archaeology of Music-Video.

    Science.gov (United States)

    Berg, Charles M.

    Music videos, with their characteristic visual energy and frenetic music-and-dance numbers, have caught on rapidly since their introduction in 1981, bringing prosperity to a slumping record industry. Creating images to accompany existing music is, however, hardly a new idea. The concept can be traced back to 1877 and Thomas Edison's invention of…

  19. Use of PIT tag and underwater video recording in assessing estuarine fish movement in a high intertidal mangrove and salt marsh creek

    Science.gov (United States)

    Meynecke, Jan-Olaf; Poole, Geoffrey C.; Werry, Jonathan; Lee, Shing Yip

    2008-08-01

    We assessed movement patterns in relation to habitat availability (reflected by the extent of tidal flooding) for several commercially and recreationally important species in and out of a small mangrove creek within the subtropical Burrum River estuary (25°10'S 152°37'E) in Queensland, Australia. Movement patterns of Acanthopagrus australis, Pomadasys kaakan, Lutjanus russelli and Mugil cephalus were examined between December 2006 and April 2007 using a stationary passive integrated transponder (PIT) system adapted for saline environments (30-38 ppt) and underwater digital video cameras (DVCs). This is the second known application of a stationary PIT tag system to studying fish movement in estuarine environments. The transponder system was set in place for 104 days and recorded >5000 detections. Overall 'recapture' rate of tagged fish by the transponder system was >40%. We used PIT tags implanted in a total of 75 fish from a tidal creek connected to the main channel of the estuary. We also developed a high-resolution digital elevation (2.5 m cell size) model of the estuary derived from airborne light detection and ranging (LIDAR) and aerial imagery to estimate inundation dynamics within the tidal creek, and related the timing of inundation in various habitats to the timing of fish immigration to and emigration from the creek. Over 50% of all tagged fish were moving in and out of the creek at a threshold level when 50% of the mangrove forest became flooded. Individuals of all four species moved into and out of the tidal creek repeatedly at different times depending on species and size, indicating strong residential behaviour within the estuary. The main activity of fishes was at night time. Manual interpretation of video from >700 fish sightings at three different mangrove sites confirmed the findings of the stationary PIT system, that the function of shelter vs food in mangrove habitat may be size dependent. Our established techniques assess the spatial ecology

  20. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  1. Digital image processing of arterial thrombi images, recorded by light transmission

    International Nuclear Information System (INIS)

    Nyssen, M.; Blockeel, E.; Bourgain, R.

    1985-01-01

    For several years, the formation and evolution of thrombi in small arteries of rats has been studied quantitatively at the Laboratory of Physiology and Physiopathology at the V.U.B. Global size parameters can be determined by projecting the image of a small arterial segment onto photosensitive cells. The transmitted light intensity is a measure of the thrombotic phenomenon. This unique method has permitted extensive in vivo study of the platelet-vessel wall interaction and local thrombosis. A development has emerged with the aim of improving the resolution of these measurements in order to obtain information on the texture and form of the thrombotic mass at any stage of its evolution. In the particular situation studied, the dispersive properties of the flowing blood were found to be highly anisotropic. An explanation for this phenomenon could be given by considering the alignment of red blood cells in the blood flow. In order to explain the measured intensity profiles, the authors postulated alignment in the plane perpendicular to the flow as well. The theoretical predictions are in good agreement with the experimental values if we assume almost perfect alignment of the erythrocytes, such that their short axes point in the direction of the centre of the artery. Conclusive evidence of the interaction between local flow properties and light transmission could be found by observing arteries with perturbed flow.

  2. Multisensor fusion in gastroenterology domain through video and echo endoscopic image combination: a challenge

    Science.gov (United States)

    Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian

    2001-08-01

    The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline in which physicians have the choice between several imaging modalities that offer complementary advantages. Among all existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient. The use of each system corresponds to a given step in the physician's diagnostic elaboration. Nowadays, several works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify color and superficial textures of the digestive tube. Unfortunately the relief information, which is important for the diagnosis, is very difficult to retrieve. On the other hand, some studies have proved that 3D information can easily be quantified using echoendoscopy image sequences. That is why the idea of combining this information, acquired from two very different points of view, can be considered a real challenge for the medical image fusion topic. In this paper, after a review of current work concerning numerical exploitation of videoendoscopy and echoendoscopy, the following question will be discussed: how can the use of the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We will then evaluate the feasibility of a realistic 3D reconstruction based on both the information given by echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system will then follow. Further discussions and perspectives will conclude this first study.

  3. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  4. Quantifying fish swimming behavior in response to acute exposure of aqueous copper using computer assisted video and digital image analysis

    Science.gov (United States)

    Calfee, Robin D.; Puglis, Holly J.; Little, Edward E.; Brumbaugh, William G.; Mebane, Christopher A.

    2016-01-01

    Behavioral responses of aquatic organisms to environmental contaminants can be precursors of other effects such as survival, growth, or reproduction. However, these responses may be subtle, and measurement can be challenging. Using juvenile white sturgeon (Acipenser transmontanus) in copper exposures, this paper illustrates techniques used for quantifying behavioral responses with computer-assisted video and digital image analysis. In previous studies, severe impairments in swimming behavior were observed among early life-stage white sturgeon during acute and chronic exposures to copper. Sturgeon behavior was rapidly impaired, to the extent that survival in the field would be jeopardized, as fish would be swept downstream or readily captured by predators. The objectives of this investigation were to illustrate protocols to quantify swimming activity during a series of acute copper exposures to determine time to effect during early life-stage development, and to understand the significance of these responses relative to survival of these vulnerable early life-stage fish. With mortality being on a time continuum, determining when copper first affects swimming ability helps us to understand the implications for population-level effects. The techniques used are readily adaptable to experimental designs with other organisms and stressors.

  5. A New Learning Control System for Basketball Free Throws Based on Real Time Video Image Processing and Biofeedback

    Directory of Open Access Journals (Sweden)

    R. Sarang

    2018-02-01

    Full Text Available Shooting free throws plays an important role in basketball. The major problem in performing a correct free throw seems to be inappropriate training. Training is performed offline and it is often not persistent. The aim of this paper is to consciously modify and control the free throw using biofeedback. Elbow and shoulder dynamics are calculated by an image processing technique coupled with a video image acquisition system. The proposed setup, named the learning control system, is able to quantify and feed back the above parameters in real time as audio signals, thereby enabling correct learning and conscious control of shooting. Experimental results showed improvements in free throw shooting style, including the shot pocket and the locked position. The mean values of the elbow and shoulder angles were controlled at approximately 89° and 26° for the shot pocket, and these angles were tuned to approximately 180° and 47°, respectively, for the locked position (close to the desired pattern of the free throw based on valid FIBA references). Not only did the mean values improve, but the standard deviations of these angles also decreased meaningfully, which shows shooting-style convergence and uniformity. Also, in training conditions, the average percentage of successful free throws increased from about 64% to as much as 87% after using this setup, and in competition conditions the average percentage of successful free throws improved by about 20%, although using the learning control system may not be the only reason for these outcomes. The proposed system is easy to use, inexpensive, portable and applicable in real time.
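
    The joint angles such a system relies on can be computed from three tracked keypoints per frame; a minimal sketch (keypoint extraction itself is assumed to be done elsewhere):

        import numpy as np

        def joint_angle_deg(a, b, c):
            """Angle at point b (e.g. the elbow) formed by points a (shoulder) and c (wrist)."""
            v1 = np.asarray(a, float) - np.asarray(b, float)
            v2 = np.asarray(c, float) - np.asarray(b, float)
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        # Example: a roughly right-angled elbow.
        print(joint_angle_deg((0, 0), (10, 0), (10, 9)))   # ~90 degrees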

  6. Imaging and recording subventricular zone progenitor cells in live tissue of postnatal mice

    Directory of Open Access Journals (Sweden)

    Benjamin Lacar

    2010-07-01

    Full Text Available The subventricular zone (SVZ is one of two regions where neurogenesis persists in the postnatal brain. The SVZ, located along the lateral ventricle, is the largest neurogenic zone in the brain that contains multiple cell populations including astrocyte-like cells and neuroblasts. Neuroblasts migrate in chains to the olfactory bulb where they differentiate into interneurons. Here, we discuss the experimental approaches to record the electrophysiology of these cells and image their migration and calcium activity in acute slices. Although these techniques were in place for studying glial cells and neurons in mature networks, the SVZ raises new challenges due to the unique properties of SVZ cells, the cellular diversity, and the architecture of the region. We emphasize different methods, such as the use of transgenic mice and in vivo electroporation that permit identification of the different SVZ cell populations for patch clamp recording or imaging. Electroporation also permits genetic labeling of cells using fluorescent reporter mice and modification of the system using either RNA interference technology or floxed mice. In this review, we aim to provide conceptual and technical details of the approaches to perform electrophysiological and imaging studies of SVZ cells.

  7. Automated in-core image generation from video to aid visual inspection of nuclear power plant cores

    Energy Technology Data Exchange (ETDEWEB)

    Murray, Paul, E-mail: paul.murray@strath.ac.uk [Department of Electronic and Electrical Engineering, University of Strathclyde, Technology and Innovation Centre, 99 George Street, Glasgow, G1 1RD (United Kingdom); West, Graeme; Marshall, Stephen; McArthur, Stephen [Dept. Electronic and Electrical Engineering, University of Strathclyde, Royal College Building, 204 George Street, Glasgow G1 1XW (United Kingdom)

    2016-04-15

    Highlights: • A method is presented which improves visual inspection of reactor cores. • Significant time savings are made to activities on the critical outage path. • New information is extracted from existing data sources without additional overhead. • Examples from industrial case studies across the UK fleet of AGR stations. - Abstract: Inspection and monitoring of key components of nuclear power plant reactors is an essential activity for understanding the current health of the power plant and ensuring that they continue to remain safe to operate. As the power plants age, and the components degrade from their initial start-of-life conditions, the requirement for more and more detailed inspection and monitoring information increases. Deployment of new monitoring and inspection equipment on existing operational plant is complex and expensive, as the effect of introducing new sensing and imaging equipment to the existing operational functions needs to be fully understood. Where existing sources of data can be leveraged, the need for new equipment development and installation can be offset by the development of advanced data processing techniques. This paper introduces a novel technique for creating full 360° panoramic images of the inside surface of fuel channels from in-core inspection footage. Through the development of this technique, a number of technical challenges associated with the constraints of using existing equipment have been addressed. These include: the inability to calibrate the camera specifically for image stitching; dealing with additional data not relevant to the panorama construction; dealing with noisy images; and generalising the approach to work with two different capture devices deployed at seven different Advanced Gas Cooled Reactor nuclear power plants. The resulting data processing system is currently under formal assessment with a view to replacing the existing manual assembly of in-core defect montages. Deployment of the
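
    As a hedged sketch of the general idea (not the authors' pipeline, which handles uncalibrated, noisy footage from several capture devices), frames sampled from inspection video can be stitched with OpenCV's built-in stitcher in SCANS mode, which suits translating cameras better than PANORAMA mode:

        import cv2

        def stitch_from_video(path, step=15):
            cap = cv2.VideoCapture(path)
            frames, i = [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if i % step == 0:        # subsample so adjacent frames overlap but remain distinct
                    frames.append(frame)
                i += 1
            cap.release()
            stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
            status, panorama = stitcher.stitch(frames)
            if status != cv2.Stitcher_OK:
                raise RuntimeError(f"stitching failed with status {status}")
            return panorama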

  8. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  9. Monochromatic blue light entrains diel activity cycles in the Norway lobster, Nephrops norvegicus (L. as measured by automated video-image analysis

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2009-12-01

    Full Text Available There is growing interest in developing automated, non-invasive techniques for long-lasting, laboratory-based monitoring of behaviour in organisms from deep-water continental margins which are of ecological and commercial importance. We monitored the burrow emergence rhythms in the Norway lobster, Nephrops norvegicus, which included: (a) characterising the regulation of behavioural activity outside the burrow under monochromatic blue light-darkness (LD) cycles of 0.1 lx, recreating slope photic conditions (i.e. 200-300 m depth), and constant darkness (DD), which is necessary for the study of the circadian system; (b) testing the performance of a newly designed digital video-image analysis system for tracking locomotor activity. We used infrared USB web cameras and customised software (in Matlab 7.1) to acquire and process digital frames of eight animals at a rate of one frame per minute under consecutive photoperiod stages for nine days each: LD, DD, and LD (subdivided into two stages, LD1 and LD2, for analysis purposes). The automated analysis allowed the production of time series of locomotor activity based on movements of the animals' centroids. Data were studied with periodogram, waveform, and Fourier analyses. For the first time, we report robust diurnal burrow emergence rhythms during the LD period, which became weak in DD. Our results fit with field data accounting for midday peaks in catches at the depth of slopes. The comparison of the present locomotor pattern with those recorded at different light intensities clarifies the regulation of the clock of N. norvegicus at different depths.
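
    A minimal sketch of centroid tracking and rhythm detection under stated assumptions (median-frame background subtraction, one animal per arena, greyscale frames taken once per minute):

        import numpy as np
        from scipy.signal import periodogram

        def centroids(frames, threshold=30):
            """frames: array (n, h, w) of greyscale images taken once per minute."""
            background = np.median(frames, axis=0)
            cents = []
            for f in frames:
                mask = np.abs(f.astype(float) - background) > threshold
                ys, xs = np.nonzero(mask)
                cents.append((xs.mean(), ys.mean()) if len(xs) else (np.nan, np.nan))
            return np.array(cents)

        def activity_periodogram(cents, fs_per_hour=60.0):
            steps = np.linalg.norm(np.diff(cents, axis=0), axis=1)            # displacement per minute
            steps = np.nan_to_num(steps)
            freqs, power = periodogram(steps - steps.mean(), fs=fs_per_hour)  # cycles per hour
            return 1.0 / freqs[1:], power[1:]                                 # periods (h) vs. power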

  10. AN APPARATUS AND A METHOD OF RECORDING AN IMAGE OF AN OBJECT

    DEFF Research Database (Denmark)

    1999-01-01

    The invention relates to a method of recording an image of an object (103) using an electronic camera (102), one or more light sources (104), and means for light distribution (105), where light emitted from the light sources (104) is distributed to illuminate the object (103), light being reflected to the camera (102). For the light distribution, an integrating cavity (106) is used, to whose inner side (107) a light-reflecting coating has been applied, and which is provided with first and second openings (109, 110). The camera (102) is placed in alignment with the first opening (109) so that the optical

  11. Electrostatic X-ray image recording device with mesh-base photocathode photoelectron discriminator means

    International Nuclear Information System (INIS)

    1977-01-01

    An electrostatic X-ray image recording device having a pair of spaced electrodes with a gas-filled gap therebetween, and including discrimination means, having a conductive mesh supporting a photocathodic material, positioned in the gas-filled gap between a first electrode having a layer of ultraviolet-emitting fluorescent material and a second electrode having a plastic sheet adjacent thereto for receiving photoelectrons emitted by the photocathodic material and accelerated to the second electrode by an applied field. The photoconductor-mesh element discriminates against fast electrons, produced by direct impingement of X-rays upon the photocathode, to substantially reduce secondary electron production and amplification, thereby increasing both the signal-to-noise and contrast ratios. The electrostatic image formed on the plastic sheet is developed by xerographic techniques after exposure. (Auth.)

  12. Dependence of reconstructed image characteristics on the observation condition in light-in-flight recording by holography.

    Science.gov (United States)

    Komatsu, Aya; Awatsuji, Yasuhiro; Kubota, Toshihiro

    2005-08-01

    We analyze the dependence of the reconstructed image characteristic on the observation condition in the light-in-flight recording by holography both theoretically and experimentally. This holography makes it possible to record a propagating light pulse. We have found that the shape of the reconstructed image is changed when the observation position is vertically moved along the hologram plane. The reconstructed image is numerically simulated on the basis of the theory and is experimentally obtained by using a 373 fs pulsed laser. The numerical results agree with the experimental result, and the validity of the theory is verified. Also, experimental results are analyzed and the restoration of the reconstructed image is discussed.

  13. Web tools for effective retrieval, visualization, and evaluation of cardiology medical images and records

    Science.gov (United States)

    Masseroli, Marco; Pinciroli, Francesco

    2000-12-01

    To provide easy retrieval, integration and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser that provides a friendly medical user interface to perform queries on patient and medical test data and to integrate and properly visualize the various query results. A set of tools based on the Java Advanced Imaging API makes it possible to process and analyze the retrieved cardiology images and quantify their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to deploy at any site where an intranet or internet connection is available. By giving healthcare providers effective tools for querying, visualizing and comprehensively evaluating cardiology medical images and records in all locations where they may need them (i.e., emergency, operating theaters, ward, or even outpatient clinics), the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  14. Integration of Transport-relevant Data within Image Record of the Surveillance System

    Directory of Open Access Journals (Sweden)

    Adam Stančić

    2016-10-01

    Full Text Available Integration of the collected information on the road within the image recorded by the surveillance system forms a unified source of transport-relevant data about the supervised situation. The basic assumption is that the procedure of integration changes the image to an extent that is invisible to the human eye, while the integrated data keep identical content. This assumption has been proven by studying the statistical properties of the image and integrated data using a mathematical model implemented in the programming language Python with functions from additional libraries (OpenCV, NumPy, SciPy and Matplotlib). The model has been used to compare metadata input methods with methods of steganographic integration that modify the Discrete Cosine Transform (DCT) coefficients of the JPEG-compressed image. For the steganographic data processing the steganographic algorithm F5 was used. The review paper analyses the advantages and drawbacks of the integration methods and presents examples of traffic situations in which the formed unified sources of transport-relevant information could be used.
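    As a rough illustration of the kind of DCT-coefficient manipulation the model compares (this is not the F5 algorithm itself; the parity-based embedding rule shown here is an assumption chosen for brevity), one bit per 8 × 8 block can be written into a mid-frequency coefficient using the OpenCV and NumPy libraries named above:

    # Illustrative only: embed bits by forcing the parity of one mid-frequency DCT
    # coefficient per 8x8 block. Not the F5 algorithm used in the study.
    import cv2
    import numpy as np

    def embed_bits(gray, bits, coeff=(2, 3)):
        """Embed a sequence of 0/1 bits into successive 8x8 blocks of a grayscale image."""
        img = np.float32(gray)
        h, w = img.shape
        it = iter(bits)
        for y in range(0, h - 7, 8):
            for x in range(0, w - 7, 8):
                try:
                    bit = next(it)
                except StopIteration:
                    return np.uint8(np.clip(img, 0, 255))
                block = cv2.dct(img[y:y + 8, x:x + 8])
                c = int(round(block[coeff]))
                if (c & 1) != bit:            # adjust by one to flip the parity
                    c += 1 if c <= 0 else -1
                block[coeff] = float(c)
                img[y:y + 8, x:x + 8] = cv2.idct(block)
        return np.uint8(np.clip(img, 0, 255))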

  15. Digital imaging and electronic patient records in pathology using an integrated department information system with PACS.

    Science.gov (United States)

    Kalinski, Thomas; Hofmann, Harald; Franke, Dagmar-Sybilla; Roessner, Albert

    2002-01-01

    Picture archiving and communication systems (PACS) have been widely used in radiology thus far. Owing to the progress made in digital photo technology, their use in medicine opens up further opportunities. In the field of pathology, digital imaging offers new possibilities for the documentation of macroscopic and microscopic findings. Digital imaging has the advantage that the data are permanently and readily available, independent of conventional archives. In the past, PACS was a separate entity. Meanwhile, however, PACS has been integrated into the department information system (DIS), which was also run separately in former times. The combination of these two systems makes the administration of patient data, findings and images easier. Moreover, thanks to the introduction of special communication standards, data exchange between different department information systems and hospital information systems (HIS) is possible. This provides the basis for a communication platform in medicine, constituting an electronic patient record (EPR) that permits interdisciplinary treatment of patients by providing findings and image data from clinics treating the same patient. As the pathologic diagnosis represents a central and often therapy-determining component, it is of utmost importance to add pathologic diagnoses to the EPR. Furthermore, the pathologist's work is considerably facilitated when he is able to retrieve additional data from the patient file. In this article, we describe our experience gained with the combined PACS and DIS systems recently installed at the Department of Pathology, University of Magdeburg. Moreover, we evaluate the current situation and future prospects for PACS in pathology.

  16. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and it includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  17. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video intervention....../prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We...... observation and instruction (directives) relayed across different spaces; 2) the use of recorded video by participants to visualise, spatialise and localise talk and action that is distant in time and/or space; 3) the translating, stretching and cutting of social experience in and through the situated use...

  18. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Near Constant Contrast (NCC) Imagery Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  19. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Base Height (CBH) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Cloud Base Heights (CBH) from the Visible Infrared Imaging Radiometer Suite...

  20. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Type and Phase Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of cloud type and phase from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  1. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Land Surface Temperature (LST) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Land Surface Temperature (LST) from the Visible Infrared Imaging Radiometer Suite...

  2. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Cover Layer (CCL) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality Environmental Data Record (EDR) of Cloud Cover Layers (CCL) from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  3. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Optical Thickness (COT) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Cloud Optical Thickness (COT) from the Visible Infrared Imaging Radiometer Suite...

  4. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Ice Thickness and Age Environmental Data Records (EDRs) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Ice Thickness and Age from the Visible Infrared Imaging Radiometer Suite (VIIRS)...

  5. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Ice Surface Temperature (IST) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  6. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Top Height (CTH) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  7. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Top Temperature (CTT) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  8. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Effective Particle Size (CEPS) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Cloud Effective Particle Size (CEPS) from the Visible Infrared Imaging Radiometer...

  9. JPSS NOAA Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Top Pressure (CTP) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  10. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Sea Ice Characterization (SIC) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains an Environmental Data Record (EDR) of Sea Ice Characterization (SIC) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument...

  11. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Cloud Height (Top and Base) Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of cloud height (top and base) from the Visible Infrared Imaging Radiometer Suite...

  12. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Ocean Color/Chlorophyll (OCC) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of Ocean Color/Chlorophyll (OCC) from the Visible Infrared Imaging Radiometer Suite...

  13. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Volcanic Ash Detection and Height Environmental Data Record (EDR) from NDE

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) of volcanic ash from the Visible Infrared Imaging Radiometer (VIIRS) instrument...

  14. NOAA JPSS Visible Infrared Imaging Radiometer Suite (VIIRS) Imagery (not Near Constant Contrast) Environmental Data Record (EDR) from IDPS

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains a high quality operational Environmental Data Record (EDR) from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the...

  15. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as part of the instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as part of a pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  16. Decoding of digital magnetic recording with longitudinal magnetization of a tape from a magneto-optical image of stray fields

    Science.gov (United States)

    Lisovskii, F. V.; Mansvetova, E. G.

    2017-05-01

    For digital magnetic recording of encoded information with longitudinal magnetization of the tape, the connection between the domain structure of the storage medium and the magneto-optical image of its stray fields, obtained using a magnetic film with perpendicular anisotropy and a large Faraday rotation, has been studied. For a two-frequency binary code without return to zero, an algorithm is developed that allows unique decoding of the information recorded on the tape based on an analysis of an image of the stray fields.

  17. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... findings: 1) They are based on a collaborative approach. 2) The sketches act as a means of externalizing hypotheses and assumptions among the participants. Based on our analysis we present an overview of factors involved in collaborative video sketching and show how the factors relate to steps, where...... the participants: shape, record, review and edit their work, leading the participants to new insights about their work....

  18. Reprocessing the Historical Satellite Passive Microwave Record at Enhanced Spatial Resolutions using Image Reconstruction

    Science.gov (United States)

    Hardman, M.; Brodzik, M. J.; Long, D. G.; Paget, A. C.; Armstrong, R. L.

    2015-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Currently available global gridded passive microwave data sets serve a diverse community of hundreds of data users, but do not meet many requirements of modern Earth System Data Records (ESDRs) or Climate Data Records (CDRs), most notably in the areas of intersensor calibration, quality-control, provenance and consistent processing methods. The original gridding techniques were relatively primitive, and the data were produced on 25 km grids using the original EASE-Grid definition, which is not easily accommodated in modern software packages. Further, since the first Level 3 data sets were produced, the Level 2 passive microwave data on which they were based have been reprocessed as Fundamental CDRs (FCDRs) with improved calibration and documentation. We are funded by NASA MEaSUREs to reprocess the historical gridded data sets as EASE-Grid 2.0 ESDRs, using the most mature available Level 2 satellite passive microwave (SMMR, SSM/I-SSMIS, AMSR-E) records from 1978 to the present. We have produced prototype data from SSM/I and AMSR-E for the year 2003, for review and feedback from our Early Adopter user community. The prototype data set includes conventional, low-resolution ("drop-in-the-bucket" 25 km) grids and enhanced-resolution grids derived from the two candidate image reconstruction techniques we are evaluating: 1) Backus-Gilbert (BG) interpolation and 2) a radiometer version of Scatterometer Image Reconstruction (SIR). We summarize our temporal subsetting technique, algorithm tuning parameters and computational costs, and include sample SSM/I images at enhanced resolutions of up to 3 km. We are actively
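    The conventional "drop-in-the-bucket" gridding mentioned above can be sketched as follows; the grid geometry and variable names are placeholders rather than the actual EASE-Grid 2.0 definition:

    # Drop-in-the-bucket gridding: average every observation into the cell it falls in.
    import numpy as np

    def drop_in_the_bucket(rows, cols, tb, nrows, ncols):
        """rows/cols: integer grid indices per observation; tb: brightness temperatures."""
        total = np.zeros((nrows, ncols))
        count = np.zeros((nrows, ncols), dtype=int)
        np.add.at(total, (rows, cols), tb)   # accumulate observations per cell
        np.add.at(count, (rows, cols), 1)
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(count > 0, total / count, np.nan)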

  19. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video by their varying and evolving personal and communicational motivations for recording.

  20. Image Analysis of Eccentric Photorefraction

    Directory of Open Access Journals (Sweden)

    J. Dušek

    2004-01-01

    Full Text Available This article deals with image and data analysis of recorded video sequences of strabismic infants. It describes a unique noninvasive measuring system for infants based on two measuring methods (the position of the first Purkinje image relative to the centre of the lens, and eccentric photorefraction). The whole process is divided into three steps. The aim of the first step is to obtain video sequences on our special system (Eye Movement Analyser). Image analysis of the recorded sequences is performed in order to obtain curves of basic eye reactions (accommodation and convergence). The last step is to calibrate these curves to the corresponding units (dioptres and degrees of movement).

  1. A digital video tracking system

    Science.gov (United States)

    Giles, M. K.

    1980-01-01

    The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.

  2. Time-lapse video system used to study nesting gyrfalcons

    Science.gov (United States)

    Booms, Travis; Fuller, Mark R.

    2003-01-01

    We used solar-powered time-lapse video photography to document nesting Gyrfalcon (Falco rusticolus) food habits in central West Greenland from May to July in 2000 and 2001. We collected 2677.25 h of videotape from three nests, representing 94, 87, and 49% of the nestling period at each nest. The video recorded 921 deliveries of 832 prey items. We placed 95% of the items into prey categories. The image quality was good but did not reveal enough detail to identify most passerines to species. We found no evidence that Gyrfalcons were negatively affected by the video system after the initial camera set-up. The video system experienced some mechanical problems but proved reliable. The system likely can be used to effectively document the food habits and nesting behavior of other birds, especially those delivering large prey to a nest or other frequently used site.

  3. Content-based video indexing and searching with wavelet transformation

    Science.gov (United States)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database yields a significant proportion of that individual's biometric data.
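    A hedged sketch of how a compact wavelet-based signature of the kind described above could be formed (subband-energy vectors per frame, averaged over the video, using PyWavelets); this illustrates the general idea only and is not the authors' exact scheme:

    # Compact wavelet signature: mean absolute value of each wavelet subband per frame,
    # then the mean feature vector over all frames as the video signature.
    import numpy as np
    import pywt

    def frame_signature(gray, wavelet="haar", level=3):
        coeffs = pywt.wavedec2(np.float64(gray), wavelet, level=level)
        feats = [np.mean(np.abs(coeffs[0]))]              # approximation subband
        for (cH, cV, cD) in coeffs[1:]:                   # detail subbands per level
            feats += [np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))]
        return np.asarray(feats)

    def video_signature(frames):
        return np.mean([frame_signature(f) for f in frames], axis=0)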

  4. MR-eyetracker: a new method for eye movement recording in functional magnetic resonance imaging.

    Science.gov (United States)

    Kimmig, H; Greenlee, M W; Huethe, F; Mergner, T

    1999-06-01

    We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph designed for visual functional magnetic resonance imaging (fMRI) experiments. To reliably classify brain areas as pursuit or saccade related it is important to carefully measure the actual eye movements. For this purpose, infrared light, created outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the eye of the subject. Two additional fiber optical cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2 degrees at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
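    The calibration step can be illustrated with a minimal sketch: fitting a linear map from the differential optical signal to horizontal eye position using fixations at known target angles. The signal values in the example are hypothetical:

    # Linear calibration of the differential photodetector signal to degrees of gaze.
    import numpy as np

    def calibrate(signal_at_targets, target_deg):
        """Least-squares fit signal -> degrees; returns a callable for raw samples."""
        gain, offset = np.polyfit(signal_at_targets, target_deg, 1)
        return lambda s: gain * np.asarray(s) + offset

    # Hypothetical mean signal values recorded while fixating targets at -10, 0, +10 deg.
    to_degrees = calibrate([-0.42, 0.01, 0.44], [-10.0, 0.0, 10.0])
    print(to_degrees([0.2, -0.1]))   # convert raw samples to degrees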

  5. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    Energy Technology Data Exchange (ETDEWEB)

    Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-08-15

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors
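    A minimal sketch of the positional-error computation described above, assuming each 2D dose frame is available as a NumPy array and using simple intensity thresholds to separate the exposed field from the attenuated target region; the thresholds and function names are assumptions, not the authors' in-house software:

    # Per-frame Y-direction error: difference between the centroids of the attenuated
    # target region and of the exposed field in a 2D dose image.
    import numpy as np

    def centroid_y(weights):
        ys = np.arange(weights.shape[0])[:, None]
        return float((ys * weights).sum() / weights.sum())

    def y_error(dose_frame, field_thresh=0.5, target_thresh=0.8):
        field = dose_frame > field_thresh * dose_frame.max()               # exposed field mask
        target = field & (dose_frame < target_thresh * dose_frame.max())   # attenuated target
        if not target.any():
            return float("nan")
        return abs(centroid_y(target.astype(float)) - centroid_y(field.astype(float)))

    def summarize(errors):
        """Positional error reported as mean absolute difference + 2 standard deviations."""
        e = np.asarray(errors, dtype=float)
        return float(np.mean(e) + 2.0 * np.std(e))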

  6. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    International Nuclear Information System (INIS)

    Ebe, Kazuyu; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence

    2015-01-01

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors

  7. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  8. Context indexing of digital cardiac ultrasound records in PACS

    Science.gov (United States)

    Lobodzinski, S. Suave; Meszaros, Georg N.

    1998-07-01

    Recent wide adoption of the DICOM 3.0 standard by ultrasound equipment vendors created a need for practical clinical implementations of cardiac imaging study visualization, management and archiving. DICOM 3.0 defines only a logical and physical format for exchanging image data (still images, video, patient and study demographics). All DICOM compliant imaging studies must presently be archived on a 650 MB recordable compact disc. This is a severe limitation for ultrasound applications, where studies 3 to 10 minutes long are common practice. In addition, DICOM digital echocardiography objects require physiological signal indexing, content segmentation and characterization. Since DICOM 3.0 is an interchange standard only, it does not define how to store composite video objects in a database. The goal of this research was therefore to address the issues of efficient storage, retrieval and management of DICOM compliant cardiac video studies in a distributed PACS environment. Our Web based implementation has the advantage of accommodating both DICOM defined entity-relation modules (equipment data, patient data, video format, etc.) in standard relational database tables and digital indexed video with its attributes in an object relational database. An object relational data model facilitates content indexing of full motion cardiac imaging studies through bi-directional hyperlink generation that ties searchable video attributes and related objects to individual video frames in the temporal domain. Benefits realized from the use of bi-directionally hyperlinked data models in an object relational database include: (1) real time video indexing during image acquisition, (2) random access and frame accurate instant playback of previously recorded full motion imaging data, and (3) time savings from faster and more accurate access to data through multiple navigation mechanisms such as multidimensional queries on an index, queries on a hyperlink attribute, free search and browsing.

  9. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing of video medical records. Two crucial issues to be decided on are a video compression format and a video streaming platform. The paper presents numerous decision factors that have to be taken into account. The compression formats being compared are DICOM as a format representative for medical applications, both MPEGs, and several new formats targeted at IP networking. The comparison includes transmission rates supported, compression rates, as well as options for controlling the compression process. The second part of the paper presents the ISDN technique as a solution for provisioning of tele-consultation services between medical parties that access resources uploaded to a digital video library. There are several backbone techniques available (such as corporate LANs/WANs, leased lines or even radio/satellite links); however, the availability of network resources for hospitals was the prevailing choice criterion pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio frequency domain solutions. The paper describes the possibilities of using both wireless and cellular network data transmission services as a medical video server transport layer. For the cellular network-based solution two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  10. First results on video meteors from Crete, Greece

    Science.gov (United States)

    Maravelias, G.

    2012-01-01

    This work presents the first systematic video meteor observations from a forthcoming permanent station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 (focal length 12 mm, f/0.8) lens running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011), recording 1905 meteors. It is significantly more performant than a previous system used by the author during the Perseids 2010 (a DMK 21AF04.AS camera by The Imaging Source, a CCTV lens of focal length 2.8 mm, UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010), recording 32 meteors. Differences - according to the author's experience - between the two software packages (MetRec, UFO Capture) are discussed, along with a small guide to video meteor hardware.

  11. Feedback on video recorded consultations in medical teaching: why students loathe and love it – a focus-group based qualitative study

    Directory of Open Access Journals (Sweden)

    Baerheim Anders

    2005-07-01

    Full Text Available Abstract Background Feedback on videotaped consultations is a useful way to enhance consultation skills among medical students. The method is becoming increasingly common, but is still not widely implemented in medical education. One obstacle might be that many students seem to consider this educational approach a stressful experience and are reluctant to participate. In order to improve the process and make it more acceptable to the participants, we wanted to identify possible problems experienced by students when making and receiving feedback on their videotaped consultations. Methods Nineteen of 75 students at the University of Bergen, Norway, participating in a consultation course in their final term of medical school underwent focus group interviews immediately following a video-based feedback session. The material was audio-taped, transcribed, and analysed by phenomenological qualitative analysis. Results The study uncovered that some students experienced emotional distress before the start of the course. They were apprehensive and lacking in confidence, expressing fear about exposing lack of skills and competence in front of each other. The video evaluation session and feedback process were evaluated positively, however, and the students found that their worries had been exaggerated. The video evaluation process also seemed to help strengthen the students' self-esteem and self-confidence, and they welcomed this. Conclusion Our study provides insight regarding the vulnerability of students receiving feedback from videotaped consultations and their need for reassurance and support in the process, and demonstrates the importance of carefully considering the design and execution of such educational programs.

  12. A Picture is Worth 1,000 Words. The Use of Clinical Images in Electronic Medical Records.

    Science.gov (United States)

    Ai, Angela C; Maloney, Francine L; Hickman, Thu-Trang; Wilcox, Allison R; Ramelson, Harley; Wright, Adam

    2017-07-12

    To understand how clinicians utilize image uploading tools in a home-grown electronic health records (EHR) system. A content analysis of patient notes containing non-radiological images from the EHR was conducted. Images from 4,000 random notes from July 1, 2009 - June 30, 2010 were reviewed and manually coded. Codes were assigned to four properties of the image: (1) image type, (2) role of image uploader (e.g. MD, NP, PA, RN), (3) practice type (e.g. internal medicine, dermatology, ophthalmology), and (4) image subject. In total, 3,815 images from image-containing notes stored in the EHR were reviewed and manually coded. Of those images, 32.8% were clinical and 66.2% were non-clinical. The most common types of clinical images were photographs (38.0%), diagrams (19.1%), and scanned documents (14.4%). MDs uploaded 67.9% of clinical images, followed by RNs with 10.2%, and genetic counselors with 6.8%. Dermatology (34.9%), ophthalmology (16.1%), and general surgery (10.8%) uploaded the most clinical images. The content of clinical images referencing body parts varied, with 49.8% of those images focusing on the head and neck region, 15.3% focusing on the thorax, and 13.8% focusing on the lower extremities. The diversity of image types, content, and uploaders within a home-grown EHR system reflected the versatility and importance of the image uploading tool. Understanding how users utilize image uploading tools in a clinical setting highlights important considerations for designing better EHR tools and the importance of interoperability between EHR systems and other health technology.

  13. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  14. Assessment of Machine Learning Algorithms for Automatic Benthic Cover Monitoring and Mapping Using Towed Underwater Video Camera and High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Hassan Mohamed

    2018-05-01

    Full Text Available Benthic habitat monitoring is essential for many applications involving biodiversity, marine resource management, and the estimation of variations over temporal and spatial scales. Nevertheless, both automatic and semi-automatic analytical methods for deriving ecologically significant information from towed camera images are still limited. This study proposes a methodology that enables a high-resolution towed camera with a Global Navigation Satellite System (GNSS) to adaptively monitor and map benthic habitats. First, the towed camera finishes a pre-programmed initial survey to collect benthic habitat videos, which can then be converted to geo-located benthic habitat images. Second, an expert labels a number of benthic habitat images to classify habitats manually. Third, attributes for categorizing these images are extracted automatically using the Bag of Features (BOF) algorithm. Fourth, benthic cover categories are detected automatically using Weighted Majority Voting (WMV) ensembles of Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Bagging (BAG) classifiers. Fifth, the trained WMV ensembles can be used for categorizing more benthic cover images automatically. Finally, correctly categorized geo-located images can provide ground truth samples for benthic cover mapping using high-resolution satellite imagery. The proposed methodology was tested over Shiraho, Ishigaki Island, Japan, a heterogeneous coastal area. The WMV ensemble exhibited 89% overall accuracy for categorizing corals, sediments, seagrass, and algae species. Furthermore, the same WMV ensemble produced a benthic cover map using a Quickbird satellite image with 92.7% overall accuracy.
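    A hedged sketch of a Weighted Majority Voting ensemble over SVM, K-NN and Bagging classifiers with scikit-learn, assuming the Bag-of-Features step has already produced feature vectors X and labels y; weighting each member by its validation accuracy is one simple choice and not necessarily the weighting used by the authors:

    # WMV ensemble: members weighted by their accuracy on a held-out validation split.
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import BaggingClassifier, VotingClassifier
    from sklearn.model_selection import train_test_split

    def build_wmv(X, y):
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
        members = [("svm", SVC(probability=True)),
                   ("knn", KNeighborsClassifier(n_neighbors=5)),
                   ("bag", BaggingClassifier(n_estimators=50, random_state=0))]
        weights = []
        for _, clf in members:
            clf.fit(X_tr, y_tr)
            weights.append(clf.score(X_val, y_val))   # validation accuracy as vote weight
        ensemble = VotingClassifier(estimators=members, voting="soft", weights=weights)
        return ensemble.fit(X, y)                     # refit members on all data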

  15. A New Colorimetrically-Calibrated Automated Video-Imaging Protocol for Day-Night Fish Counting at the OBSEA Coastal Cabled Observatory

    Directory of Open Access Journals (Sweden)

    Joaquín del Río

    2013-10-01

    Full Text Available Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation for the extraction of biological information (i.e., animals’ visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel, endowed with a 9-colour calibration chart, and calibrated using the recently implemented “3D Thin-Plate Spline” warping approach in order to numerically define color by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected since they were acquired under optimum visibility conditions. All images plus those of the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of 908 in total, corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes’ bodies. Time series of manual and visual counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of respective rhythms were
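    The Roberts operator referred to above is a pair of 2 × 2 diagonal-gradient kernels whose combined magnitude highlights regions of high spatial gradient; a minimal single-channel sketch follows (the study applies the operator to colour-calibrated images):

    # Roberts cross operator: diagonal gradients combined into a gradient magnitude map.
    import numpy as np
    from scipy.ndimage import convolve

    def roberts(gray):
        g = np.float64(gray)
        kx = np.array([[1.0, 0.0], [0.0, -1.0]])
        ky = np.array([[0.0, 1.0], [-1.0, 0.0]])
        return np.hypot(convolve(g, kx), convolve(g, ky))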

  16. A new colorimetrically-calibrated automated video-imaging protocol for day-night fish counting at the OBSEA coastal cabled observatory.

    Science.gov (United States)

    del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni

    2013-10-30

    Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation for the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel, endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define color by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected since they were acquired under optimum visibility conditions. All images plus those of the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of 908 in total, corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and visual counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of respective rhythms were different. Results

  17. A comparison of the quality of image acquisition between the incident dark field and sidestream dark field video-microscopes

    NARCIS (Netherlands)

    E. Gilbert-Kawai; J. Coppel (Jonny); V. Bountziouka (Vassiliki); C. Ince (Can); D. Martin (Daniel)

    2016-01-01

    Background: The ‘Cytocam’ is a third generation video-microscope, which enables real time visualisation of the in vivo microcirculation. Based upon the principle of incident dark field (IDF) illumination, this hand held computer-controlled device was designed to address the

  18. Low-complexity wavelet-based image/video coding for home-use and remote surveillance

    NARCIS (Netherlands)

    Loomans, M.J.H.; Koeleman, C.J.; Joosen, K.M.J.; With, de P.H.N.

    2011-01-01

    The availability of inexpensive cameras enables alternative applications beyond personal video communication. For example, surveillance of rooms and home premises is such an alternative application, which can be extended with remote viewing on hand-held battery-powered consumer devices. Scalable

  19. A comparison of the quality of image acquisition between the incident dark field and sidestream dark field video-microscopes

    NARCIS (Netherlands)

    Gilbert-Kawai, Edward; Coppel, Jonny; Bountziouka, Vassiliki; Ince, Can; Martin, Daniel; Ahuja, V.; Aref-Adib, G.; Burnham, R.; Chisholm, A.; Clarke, K.; Coates, D.; Coates, M.; Cook, D.; Cox, M.; Dhillon, S.; Dougall, C.; Doyle, P.; Duncan, P.; Edsell, M.; Edwards, L.; Evans, L.; Gardiner, P.; Grocott, M.; Gunning, P.; Hart, N.; Harrington, J.; Harvey, J.; Holloway, C.; Howard, D.; Hurlbut, D.; Imray, C.; Jonas, M.; van der Kaaij, J.; Khosravi, M.; Kolfschoten, N.; Levett, D.; Luery, H.; Luks, A.; Martin, D.; McMorrow, R.; Meale, P.; Mitchell, K.; Montgomery, H.; Morgan, G.; Morgan, J.; Murray, A.; Mythen, M.; Newman, S.; O'Dwyer, M.; Pate, J.

    2016-01-01

    Background: The 'Cytocam' is a third generation video-microscope, which enables real time visualisation of the in vivo microcirculation. Based upon the principle of incident dark field (IDF) illumination, this hand held computer-controlled device was designed to address the technical limitations of

  20. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and the automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.
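    Independent motion detection in UAS video normally also requires compensating for camera ego-motion; the sketch below shows only the simplest static-camera variant (frame differencing with OpenCV) as an illustration of the motion-detection subtask, not the IDM algorithm evaluated in the study:

    # Frame-differencing motion detection (static camera, OpenCV 4.x API).
    import cv2

    def moving_regions(prev_gray, curr_gray, thresh=25, min_area=50):
        """Bounding boxes of regions that changed between two grayscale frames."""
        diff = cv2.absdiff(prev_gray, curr_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]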