WorldWideScience

Sample records for video images recorded

  1. Usefulness of video images from an X-ray simulator in recording the treatment portal of pulmonary lesions

    International Nuclear Information System (INIS)

    Nishioka, Masayuki; Sakurai, Makoto; Fujioka, Tomio; Fukuoka, Masahiro; Kusunoki, Yoko; Nakajima, Toshifumi; Onoyama, Yasuto

    1992-01-01

    Movement of the target volume should be taken into consideration in treatment planning. Respiratory movement is the greatest motion in radiotherapy for pulmonary lesions. We combined video with an X-ray simulator to record this movement. Of 50 patients whose images were recorded, respiratory movements of 0 to 4 mm, 5 to 9 mm, and more than 10 mm were observed in 13, 21, and 16 patients, respectively. Discrepancies of 5 to 9 mm and of more than 10 mm between simulator films and video images were observed in 14 and 13 patients, respectively. These results show that video images are useful in recording movement so that respiratory motion can be taken into account. We recommend that a video system added to an X-ray simulator be used for treatment planning, especially in radiotherapy for pulmonary lesions. (author)

  2. Dedicated data recording video system for Spacelab experiments

    Science.gov (United States)

    Fukuda, Toshiyuki; Tanaka, Shoji; Fujiwara, Shinji; Onozuka, Kuniharu

    1984-04-01

    A feasibility study of modifying a video tape recorder (VTR) to add data recording capability was conducted. The system is an on-board system that supports Spacelab experiments as a dedicated video system and a dedicated data recording system, operating independently of the normal operation of the Orbiter, Spacelab, and the other experiments. It continuously records the video image signals together with the acquired data, status, and operator's voice on one cassette video tape. Items such as crew actions, animal behavior, microscopic views, and materials melting in a furnace are recorded. Experimenters can therefore make very easy and convenient analyses of the synchronized video, voice, and data signals in their post-flight analysis.

  3. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The visually determined resolution in the fluoroscopic image depends on the dose rate and the inertia of the television pick-up tube; this relationship is discussed. In recent years, video recorders have been used increasingly in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of functional prototypes of high-resolution magnetic video recorders shows that this quality drop can largely be overcome. The influence of electrical bandwidth and number of lines on the resolution of the stored X-ray television image is explained in more detail. (orig.)

  4. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products, based on a list of conditions that might be diffic...

  5. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  6. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video images is of great significance for military and medical applications, but nighttime video images have such poor quality that neither target nor background can be recognized. We therefore enhance the nighttime video image by fusing the infrared video image with the visible video image. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix is derived from the improved SIFT algorithm and used to rapidly register the heterologous nighttime images, while the αβ-weighted algorithm can be applied to any scene. In the video image fusion system, we use the transfer matrix to register every frame and then the αβ-weighted method to fuse every frame, which meets the real-time requirement of video. The fused video image not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and the fused video plays fluently.
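The αβ-weighted fusion stage described in this abstract might be sketched as follows, assuming already-registered single-channel frames; the function name and weight values are illustrative assumptions, not taken from the paper:

```python
def alpha_beta_fuse(infrared, visible, alpha=0.6, beta=0.4):
    """Fuse two registered grayscale frames by a per-pixel weighted sum.

    `infrared` and `visible` are equal-sized 2-D lists of intensities
    (0-255); `alpha` and `beta` are hypothetical weights.
    """
    return [[min(255, round(alpha * ir + beta * vi))
             for ir, vi in zip(ir_row, vi_row)]
            for ir_row, vi_row in zip(infrared, visible)]

# A bright infrared target over a dim visible background.
ir_frame = [[200, 200], [200, 200]]
vi_frame = [[50, 50], [50, 50]]
print(alpha_beta_fuse(ir_frame, vi_frame))  # [[140, 140], [140, 140]]
```

In a full pipeline this per-frame step would run after SIFT-based registration has warped one modality onto the other.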

  7. Video stereopsis of cardiac MR images

    International Nuclear Information System (INIS)

    Johnson, R.F. Jr.; Norman, C.

    1988-01-01

    This paper describes MR images of the heart acquired using a spin-echo technique synchronized to the electrocardiogram. Sixteen 0.5-cm-thick sections with a 0.1-cm gap between sections were acquired in the coronal view to cover the entire cardiac anatomy, including the vasculature. Two sets of images were obtained with a subject rotation corresponding to the stereoscopic viewing angle of the eyes. The images were digitized, spatially registered, and processed by a three-dimensional graphics workstation for stereoscopic viewing. Video recordings were made of each set of images and then temporally synchronized to produce a single video image corresponding to the appropriate eye view.

  8. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor, but also the integrated video image, which becomes gradually clearer, on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs per millimeter (lp/mm). The integrated image is transferred to a personal computer, where image processing is performed qualitatively and quantitatively. (author)

  9. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  10. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  11. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Currently most ophthalmic operating rooms are equipped with an analog video recording system [an analog charge-coupled device (CCD) camera for video grabbing and a video cassette recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities, and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software, and formats for archiving, are discussed.

  12. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid-state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used in any video transmission system where a two-way digital data link can be established. It uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray-scale values of these points are compared at the recorder controller, and if the values agree within limits, the image is authenticated. If a significantly different image were substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system.
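The sample-point comparison described here can be sketched in a few lines; the seeded point selection, tolerance, and failure threshold below are illustrative assumptions, not the parameters of the actual circuitry:

```python
import random

def sample_points(seed, width, height, n=16):
    """Derive n pseudo-random pixel coordinates from a shared seed, so the
    camera end and the recorder end select the same comparison points."""
    rng = random.Random(seed)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(n)]

def authenticate(cam_frame, rec_frame, seed, tolerance=8, max_bad=2):
    """Authenticate a received frame: compare gray-scale values at the
    sampled points and accept if at most `max_bad` of them differ by
    more than `tolerance` gray levels."""
    pts = sample_points(seed, len(cam_frame[0]), len(cam_frame))
    bad = sum(1 for x, y in pts
              if abs(cam_frame[y][x] - rec_frame[y][x]) > tolerance)
    return bad <= max_bad

frame = [[(x * y) % 256 for x in range(64)] for y in range(48)]
print(authenticate(frame, frame, seed=42))  # True: identical images agree
shifted = [[(v + 50) % 256 for v in row] for row in frame]
print(authenticate(frame, shifted, seed=42))  # False: substituted image fails
```

A substituted image fails at nearly every sampled point, whereas ordinary transmission noise perturbs only a few points within tolerance.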

  13. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  14. Guided filtering for solar image/video processing

    Directory of Open Access Journals (Sweden)

    Long Xu

    2017-06-01

    A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily discern important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it further highlights fibrous structures on/beyond the solar disk. These fibrous structures clearly demonstrate the progress of solar flares, prominence/coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm yields a significant enhancement in the visual quality of solar images over the original input and several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities in recorded images/movies.
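The guided filter at the core of such an algorithm (He, Sun, and Tang's formulation) can be sketched in a minimal single-channel form. This naive pure-Python version is for illustration only and is not the authors' implementation:

```python
def box_mean(img, r):
    """Naive mean filter with window radius r (window clamped at borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            vals = [img[j][i] for j in range(y0, y1) for i in range(x0, x1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of image p, guided by image I.

    Computes per-window linear coefficients a, b with q = a*I + b,
    then averages them, as in the standard guided-filter derivation."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_Ip = box_mean([[i * q for i, q in zip(ri, rp)]
                        for ri, rp in zip(I, p)], r)
    corr_II = box_mean([[i * i for i in row] for row in I], r)
    h, w = len(I), len(I[0])
    a = [[0.0] * w for _ in range(h)]
    b = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            var_I = corr_II[y][x] - mean_I[y][x] ** 2
            cov_Ip = corr_Ip[y][x] - mean_I[y][x] * mean_p[y][x]
            a[y][x] = cov_Ip / (var_I + eps)
            b[y][x] = mean_p[y][x] - a[y][x] * mean_I[y][x]
    mean_a, mean_b = box_mean(a, r), box_mean(b, r)
    return [[mean_a[y][x] * I[y][x] + mean_b[y][x] for x in range(w)]
            for y in range(h)]
```

With I = p (self-guidance) the filter smooths noise while preserving edges; on a constant region the output equals the input exactly.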

  15. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black-and-white images at rates of several thousand frames per second, using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board, and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission; for common photographic flash units lasting about 20 microseconds it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated by a red, a green, and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which can potentially be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame-grabbed, and stored in digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed with a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
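The correction equations mentioned above amount to inverting a per-pixel color-mixing model. A minimal sketch, assuming a hypothetical 3×3 crosstalk matrix (the real coefficients would come from the article's calibration procedure):

```python
def invert_3x3(m):
    """Invert a 3x3 matrix via the adjugate (fails on singular input)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

def correct_pixel(measured_rgb, mixing):
    """Recover the true per-flash exposures of one pixel from its measured
    RGB values, given the calibrated color-mixing (crosstalk) matrix."""
    inv = invert_3x3(mixing)
    return [sum(inv[r][c] * measured_rgb[c] for c in range(3))
            for r in range(3)]

# Illustrative mixing matrix: each channel leaks 10% into its neighbors.
MIX = [[1.0, 0.1, 0.0],
       [0.1, 1.0, 0.1],
       [0.0, 0.1, 1.0]]
true_rgb = [100.0, 50.0, 25.0]
measured = [sum(MIX[r][c] * true_rgb[c] for c in range(3)) for r in range(3)]
print([round(v, 6) for v in correct_pixel(measured, MIX)])  # [100.0, 50.0, 25.0]
```

Applying the inverse mixing matrix to every pixel removes the inter-channel "ghosting" between the three time-resolved frames.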

  16. Implications of the law on video recording in clinical practice.

    Science.gov (United States)

    Henken, Kirsten R; Jansen, Frank Willem; Klein, Jan; Stassen, Laurents P S; Dankelman, Jenny; van den Dobbelsteen, John J

    2012-10-01

    Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health care practice. Jurisprudence was searched to exemplify legislation on video recording in health care. In addition, legislation was translated for different applications of video in health care found in the literature. Three principles in Western law are relevant for video recording in health care practice: (1) regulations on privacy regarding personal data, which apply to the gathering and processing of video data in health care settings; (2) the patient record, in which video data can be stored; and (3) professional secrecy, which protects the privacy of patients including video data. Practical implementation of these principles in video recording in health care does not exist. Practical regulations on video recording in health care for different specifically defined purposes are needed. Innovations in video capture technology that enable video data to be made anonymous automatically can contribute to protection for the privacy of all the people involved.

  17. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    Science.gov (United States)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
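The frequency-domain characterization described here can be illustrated with a toy version: track the mean intensity of a target area over frames, then locate the dominant spectral peak. This sketch (naive DFT on synthetic data) only mimics the idea and is not the authors' processing chain:

```python
import cmath
import math

def dominant_frequency(samples, frame_rate):
    """Return the dominant oscillation frequency (Hz) in a mean-intensity
    time series, found via a naive DFT with the DC term removed."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * frame_rate / n

# Synthetic target-area intensity oscillating at 1,200 Hz,
# sampled at 6,000 frames per second for 0.1 s.
fps, f0 = 6000, 1200
series = [128 + 40 * math.sin(2 * math.pi * f0 * t / fps) for t in range(600)]
print(dominant_frequency(series, fps))  # 1200.0
```

In practice one would use an FFT and compare the recovered peak against the tunnel-mounted dynamic pressure spectra, as the abstract describes.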

  18. Biased lineup instructions and face identification from video images.

    Science.gov (United States)

    Thompson, W Burt; Johnson, Jaime

    2008-01-01

    Previous eyewitness memory research has shown that biased lineup instructions reduce identification accuracy, primarily by increasing false-positive identifications in target-absent lineups. Because some attempts at identification do not rely on a witness's memory of the perpetrator but instead involve matching photos to images on surveillance video, the authors investigated the effects of biased instructions on identification accuracy in a matching task. In Experiment 1, biased instructions did not affect the overall accuracy of participants who used video images as an identification aid, but nearly all correct decisions occurred with target-present photo spreads. Both biased and unbiased instructions resulted in high false-positive rates. In Experiment 2, which focused on video-photo matching accuracy with target-absent photo spreads, unbiased instructions led to more correct responses (i.e., fewer false positives). These findings suggest that investigators should not relax precautions against biased instructions when people attempt to match photos to an unfamiliar person recorded on video.

  19. Clients experience of video recordings of their psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    Background: Due to the development of technologies and low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical, or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies of how these recordings are experienced by clients. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents a qualitative, explorative study of clients' experiences...

  20. 3D reconstruction of cystoscopy videos for comprehensive bladder records.

    Science.gov (United States)

    Lurie, Kristen L; Angst, Roland; Zlatev, Dimitar V; Liao, Joseph C; Ellerbee Bowden, Audrey K

    2017-04-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize a 3D model of organs from an endoscopic video that captures the shape and surface appearance of the organ. A key aspect of our strategy is the use of advanced computer vision techniques and unmodified, clinical-grade endoscopy hardware with few constraints on the image acquisition protocol, which presents a low barrier to clinical translation. We validate the accuracy and robustness of our reconstruction and co-registration method using cystoscopy videos from tissue-mimicking bladder phantoms and show clinical utility during cystoscopy in the operating room for bladder cancer evaluation. As our method can powerfully augment the visual medical record of the appearance of internal organs, it is broadly applicable to endoscopy and represents a significant advance in cancer surveillance opportunities for big-data cancer research.

  1. Video retrieval by still-image analysis with ImageMiner

    Science.gov (United States)

    Kreyss, Jutta; Roeper, M.; Alshuth, Peter; Hermes, Thorsten; Herzog, Otthein

    1997-01-01

    The large amount of available multimedia information (e.g. videos, audio, images) requires efficient and effective annotation and retrieval methods. As videos start to play a more important role in multimedia, we want to make them available for content-based retrieval. The ImageMiner system, which was developed in the AI group at the University of Bremen, is designed for content-based retrieval of single images by a new combination of techniques and methods from computer vision and artificial intelligence. In our approach to making videos available for retrieval in a large database of videos and images, there are two necessary steps: first, the detection and extraction of shots from a video, which is done by a histogram-based method, and second, the combination of the separate frames of a shot into one single still image. This is performed by a mosaicing technique. The resulting mosaiced image gives a one-image visualization of the shot and can be analyzed by the ImageMiner system. ImageMiner has been tested on several domains (e.g. landscape images, technical drawings) that cover a wide range of applications.

  2. Selection and evaluation of video tape recorders for surveillance applications

    International Nuclear Information System (INIS)

    Martinez, R.L.

    1988-01-01

    Unattended surveillance places unique requirements on video recorders. One such requirement, extended operational reliability, often cannot be determined from the manufacturers' data. Following market surveys and preliminary testing, the Sony 8mm EVO-210 recorder was selected for use in the Modular Integrated Video System (MIVS) while concurrently undergoing extensive reliability testing. A microprocessor-based controller was developed to life-test and evaluate the performance of the video cassette recorders. The controller can insert a unique binary count in the vertical interval of the recorded video signal for each scene. This feature allows automatic verification of the recorded data using a MIVS Review Station. Initially, twenty recorders were subjected to an accelerated life test, which involves recording one scene (eight video frames) every 15 seconds. The recorders were operated in the exact manner in which they are utilized in the MIVS. This paper describes the results of the preliminary testing, the accelerated life test, and the extensive testing of 130 Sony EVO-210 recorders.

  3. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image

    International Nuclear Information System (INIS)

    Nova, Joao Luiz Leocadio da; Lopes, Ricardo Tadeu

    1996-01-01

    A new methodology to evaluate the entrance surface dose of patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures with an on-line video signal system. The images, obtained from a Siemens Polymat 50, are digitized. The results show that the entrance surface dose can be obtained in real time from video imaging.

  4. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI cabling. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees better results in terms of surgical video recording quality, provides the viewer the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  5. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos yet still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and made new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  6. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    Science.gov (United States)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (a copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e., images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).

  7. Analysis of two dimensional charged particle scintillation using video image processing techniques

    International Nuclear Information System (INIS)

    Sinha, A.; Bhave, B.D.; Singh, B.; Panchal, C.G.; Joshi, V.M.; Shyam, A.; Srinivasan, M.

    1993-01-01

    A novel method for video recording of individual charged-particle scintillation images and their off-line analysis using digital image processing techniques to obtain position, time, and energy information is presented. Results of an exploratory experiment conducted using 241Am and 239Pu alpha sources are presented. (author). 3 figs., 4 tabs

  8. Markerless registration for image guided surgery. Preoperative image, intraoperative video image, and patient

    International Nuclear Information System (INIS)

    Kihara, Tomohiko; Tanaka, Yuko

    1998-01-01

    Real-time and volumetric acquisition of X-ray CT, MR, and SPECT is the latest trend in medical imaging devices. A clinical challenge is to use this multi-modality volumetric information on the patient in a complementary way throughout the diagnostic and surgical processes. Intraoperative image and patient integration aims to establish a common reference frame, by means of images, across the diagnostic and surgical processes. This provides a quantitative measure during surgery, for which we have relied mostly on doctors' skills and experience. Intraoperative image and patient integration involves various technologies; however, we think one of the most important elements is the development of markerless registration, which should be efficient and applicable to preoperative multi-modality data sets, intraoperative images, and the patient. We developed a registration system which integrates preoperative multi-modality images, the intraoperative video image, and the patient. It consists of real-time registration of a video camera for intraoperative use, markerless surface-sampling matching of patient and image, our previous work on markerless multi-modality image registration of X-ray CT, MR, and SPECT, and image synthesis on the video image. We think these techniques can be used in many applications involving video-camera-like devices such as video cameras, microscopes, and image intensifiers. (author)

  9. Multiple Generations on Video Tape Recorders.

    Science.gov (United States)

    Wiens, Jacob H.

    Helical scan video tape recorders were tested for their dubbing characteristics in order to make selection data available to media personnel. The equipment, two recorders of each type tested, was submitted by the manufacturers. The test was designed to produce quality evaluations for three generations of a single tape, thereby encompassing all…

  10. Localizing wushu players on a platform based on a video recording

    Science.gov (United States)

    Peczek, Piotr M.; Zabołotny, Wojciech M.

    2017-08-01

    This article describes the development of a method to localize an athlete on a platform during a sports performance, based on a static video recording. The sport considered for this method is wushu, a martial art; however, any other discipline can be applied. Requirements are specified, and two image processing algorithms are described. The next part presents an experiment conducted using recordings from the Pan American Wushu Championship; based on those recordings, the steps of the algorithm are shown. Results are evaluated manually. The last part of the article concludes whether the algorithm is applicable and what improvements have to be implemented to use it during sports competitions as well as for offline analysis.

  11. Mass-storage management for distributed image/video archives

    Science.gov (United States)

    Franchi, Santina; Guarda, Roberto; Prampolini, Franco

    1993-04-01

    The realization of an image/video database requires a specific design for both the database structures and mass-storage management. This issue was addressed in the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server; because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis; it manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.

  12. Implications of the law on video recording in clinical practice

    OpenAIRE

    Henken, Kirsten R.; Jansen, Frank-Willem; Klein, Jan; Stassen, Laurents; Dankelman, Jenny; Dobbelsteen, John

    2012-01-01

    textabstractBackground: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health ...

  13. Video Recordings in Public Libraries.

    Science.gov (United States)

    Doyle, Stephen

    1984-01-01

    Reports on development and operation of public library collection of video recordings, describes results of user survey conducted over 6-month period, and offers brief guidelines. Potential users, censorship and copyright, organization of collection, fees, damage and loss, funding, purchasing and promotion, formats, processing and cataloging,…

  14. Live lecture versus video-recorded lecture: are students voting with their feet?

    Science.gov (United States)

    Cardall, Scott; Krupat, Edward; Ulrich, Michael

    2008-12-01

    In light of educators' concerns that lecture attendance in medical school has declined, the authors sought to assess students' perceptions, evaluations, and motivations concerning live lectures compared with accelerated, video-recorded lectures viewed online. The authors performed a cross-sectional survey study of all first- and second-year students at Harvard Medical School. Respondents answered questions regarding their lecture attendance; use of class and personal time; use of accelerated, video-recorded lectures; and reasons for viewing video-recorded and live lectures. Other questions asked students to compare how well live and video-recorded lectures satisfied learning goals. Of the 353 students who received questionnaires, 204 (58%) returned responses. Collectively, students indicated watching 57.2% of lectures live, 29.4% recorded, and 3.8% using both methods. All students have watched recorded lectures, and most (88.5%) have used video-accelerating technologies. When using accelerated, video-recorded lecture as opposed to attending lecture, students felt they were more likely to increase their speed of knowledge acquisition (79.3% of students), look up additional information (67.7%), stay focused (64.8%), and learn more (63.7%). Live attendance remains the predominant method for viewing lectures. However, students find accelerated, video-recorded lectures equally or more valuable. Although educators may be uncomfortable with the fundamental change in the learning process represented by video-recorded lecture use, students' responses indicate that their decisions to attend lectures or view recorded lectures are motivated primarily by a desire to satisfy their professional goals. A challenge remains for educators to incorporate technologies students find useful while creating an interactive learning culture.

  15. Super VHS video cassette recorder, A-SB88; Super VHS video A-SB88

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    A super VHS video cassette recorder, A-SB88, was commercialized with no compromises in picture quality, sound quality, operability, energy conservation, design, etc. For picture quality, the VCR incorporates the S-ET system, capable of realizing quality comparable to S-VHS on a normal tape, with a three-dimensional Y/C separation circuit for dynamic moving-image detection, three-dimensional DNR (digital noise reduction) and TBC (time base corrector), and an FE (flying erase) circuit. For operability, it is provided with a remote control with a large LCD, 400x high-speed rewind, a reservation system capable of simply reserving, for example, a serial drama, and a function for searching the end of a recording. In the environmental aspect, the stand-by power consumption was reduced to 1/10 of conventional models (ratio with the Toshiba A-BS6 at display power off). (translated by NEDO)

  16. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy, more and more surgeons are changing over to recording and storing videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.

  17. Video on the Internet: An introduction to the digital encoding, compression, and transmission of moving image data.

    Science.gov (United States)

    Boudier, T; Shotton, D M

    1999-01-01

    In this paper, we seek to provide an introduction to the fast-moving field of digital video on the Internet, from the viewpoint of the biological microscopist who might wish to store or access videos, for instance in image databases such as the BioImage Database (http://www.bioimage.org). We describe and evaluate the principal methods used for encoding and compressing moving image data for digital storage and transmission over the Internet, which involve compromises between compression efficiency and retention of image fidelity, and describe the existing alternate software technologies for downloading or streaming compressed digitized videos using a Web browser. We report the results of experiments on video microscopy recordings and three-dimensional confocal animations of biological specimens to evaluate the compression efficiencies of the principal video compression-decompression algorithms (codecs) and to document the artefacts associated with each of them. Because MPEG-1 gives very high compression while yet retaining reasonable image quality, these studies lead us to recommend that video databases should store both a high-resolution original version of each video, ideally either uncompressed or losslessly compressed, and a separate edited and highly compressed MPEG-1 preview version that can be rapidly downloaded for interactive viewing by the database user. Copyright 1999 Academic Press.

  18. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    Science.gov (United States)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to both the display of images and image analysis techniques. Regarding the display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  19. Video event data recording of a taxi driver used for diagnosis of epilepsy

    Directory of Open Access Journals (Sweden)

    Kotaro Sakurai

    2014-01-01

    Full Text Available A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by the VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he had raced the engine for a long time while parked. The VEDR enabled confirmation that the accidents resulted from epileptic seizures, and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only as traffic accident evidence; it might also contribute to a driver's health care and road safety.

  20. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors' sample research results in this field in a unified way integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. ·         Discusses localization of multimedia data; ·         Examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); ·         Covers Data-Driven as well as Semantic Location Estimation.

  1. The influence of video recordings on beginning therapists’ learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    Background: Due to the development of technologies and the low costs, video recordings of psychotherapy sessions have gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents...

  2. Recorded peer video chat as a research and development tool

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Cowie, Bronwen

    2016-01-01

    When practising teachers take time to exchange their experiences and reflect on their teaching realities as critical friends, they add meaning and depth to educational research. When peer talk is facilitated through video chat platforms, teachers can meet (virtually) face to face even when...... recordings were transcribed and used to prompt further discussion. The recording of the video chat meetings provided an opportunity for researchers to listen in and follow up on points they felt needed further unpacking or clarification. The recorded peer video chat conversations provided an additional...... opportunity to stimulate and support teacher participants in a process of critical analysis and reflection on practice. The discussions themselves were empowering because in the absence of the researcher, the teachers, in negotiation with peers, choose what is important enough to them to take time to discuss....

  3. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat
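Two of the video operations listed in this record, contrast stretch and image ratioing, are simple enough to sketch. The code below is an illustrative digital analogue (not the analog video circuitry the report describes), operating on small grayscale images represented as plain lists of lists.

```python
# Illustrative sketch of two operations named in the abstract:
# linear contrast stretch and pixel-wise band ratioing.
# Images are 2-D lists of 8-bit grayscale values; no external libraries.

def contrast_stretch(img, lo=0, hi=255):
    """Linearly rescale so the darkest pixel maps to lo and the brightest to hi."""
    flat = [p for row in img for p in row]
    pmin, pmax = min(flat), max(flat)
    scale = (hi - lo) / max(pmax - pmin, 1)   # guard against a flat image
    return [[round(lo + (p - pmin) * scale) for p in row] for row in img]

def band_ratio(band_a, band_b):
    """Pixel-wise ratio of two co-registered bands (epsilon avoids division by zero)."""
    return [[a / (b + 1e-6) for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]

img = [[10, 20], [30, 40]]
print(contrast_stretch(img))   # -> [[0, 85], [170, 255]]
```

A false-color composite, also mentioned in the abstract, would simply assign three such processed bands to the R, G, and B display channels.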

  4. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    K.R. Henken (Kirsten R.); F-W. Jansen (Frank-Willem); J. Klein (Jan); L.P. Stassen (Laurents); J. Dankelman (Jenny); J.J. van den Dobbelsteen (John)

    2012-01-01

    textabstractBackground: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A

  5. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    Henken, K.R.; Jansen, F.W.; Klein, J.; Stassen, L.P.S.; Dankelman, J.; Van den Dobbelsteen, J.J.

    2012-01-01

    Background Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear

  6. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle has the following components: cameras, a video signal switching and routing unit (VSU), and the Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.

  7. High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells

    Science.gov (United States)

    Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey

    2018-05-01

    A video capillaroscopy system with a high image recording rate, able to resolve red blood cells moving within a capillary at velocities up to 5 mm/s, is considered. The proposed procedures for processing the recorded video sequence allow evaluating the spatial capillary area, capillary diameter, and center line with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift of neighboring images in the blood flow area with moving red blood cells and to measure directly the blood flow velocity along the capillary center line. The developed method opens new opportunities for biomedical diagnostics, particularly due to long-term continuous monitoring of red blood cell velocity within a capillary. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood flow velocity in a separate capillary as well as in a capillary net are presented and discussed.
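The inter-frame velocity idea in this record can be sketched in one dimension: find the shift, in pixels, that best aligns the intensity profile along the capillary center line in two consecutive frames, then convert it to a velocity. This is a minimal sketch, not the authors' code; the profiles, pixel size, and frame rate below are made-up illustration values.

```python
# Minimal 1-D sketch of inter-frame shift estimation for blood flow velocity.

def best_shift(profile_a, profile_b, max_shift):
    """Integer shift (in pixels) minimising the mean squared difference
    between profile_a and a shifted profile_b."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(profile_a[i - s], profile_b[i])
                 for i in range(len(profile_b))
                 if 0 <= i - s < len(profile_a)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

def velocity_mm_s(shift_px, pixel_size_um, frame_rate_hz):
    """Convert an inter-frame shift in pixels to a velocity in mm/s."""
    return shift_px * pixel_size_um * 1e-3 * frame_rate_hz

frame1 = [0, 0, 5, 9, 5, 0, 0, 0]
frame2 = [0, 0, 0, 0, 5, 9, 5, 0]   # same cell, moved 2 px along the line
shift = best_shift(frame1, frame2, max_shift=3)
print(velocity_mm_s(shift, pixel_size_um=2.0, frame_rate_hz=500))   # -> 2.0
```

A real system would use 2-D cross-correlation with sub-pixel interpolation, but the shift-then-scale structure is the same.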

  8. Diagnostic image quality of video-digitized chest images

    International Nuclear Information System (INIS)

    Winter, L.H.; Butler, R.B.; Becking, W.B.; Warnars, G.A.O.; Haar Romeny, B. ter; Ottes, F.P.; Valk, J.-P.J. de

    1989-01-01

    The diagnostic accuracy obtained with the Philips picture archiving and communications subsystem was investigated by means of an observer performance study using receiver operating characteristic (ROC) analysis. The image quality of conventional films and video-digitized images was compared. The scanner had a 1024 x 1024 x 8 bit memory. The digitized images were displayed on a 60-Hz interlaced display monitor with 1024 lines. Posteroanterior (PA) roentgenograms of a chest phantom with superimposed simulated interstitial pattern disease (IPD) were produced; there were 28 normal and 40 abnormal films. Normal films were produced by the chest phantom alone. Abnormal films were taken of the chest phantom with varying degrees of superimposed simulated interstitial disease for an observer performance study, because the results of a simulated interstitial pattern disease study are less likely to be influenced by perceptual capabilities. The conventional films and the video-digitized images were viewed by five experienced observers during four separate sessions. Conventional films were presented on a viewing box; the digital images were displayed on the monitor described above. The presence of simulated interstitial disease was indicated on a 5-point ROC certainty scale by each observer. We analyzed the differences between ROC curves derived from correlated data statistically. The mean time required to evaluate 68 digitized images was approximately four times the mean time needed to read the conventional films. The diagnostic quality of the video-digitized images was significantly lower (at the 5% level) than that of the conventional films (median area under the curve (AUC) of 0.71 and 0.94, respectively). (author). 25 refs.; 2 figs.; 4 tabs

  9. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades quality. Noise reduction is therefore essential for improving the visual observation quality, or as a pre-processing step for further automated analysis, such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound images and video as well as the theoretical background, algorithmic steps, and the Matlab™ code for the following group of despeckle filters:

  10. Mobile, portable lightweight wireless video recording solutions for homeland security, defense, and law enforcement applications

    Science.gov (United States)

    Sandy, Matt; Goldburt, Tim; Carapezza, Edward M.

    2015-05-01

    It is desirable for executive officers of law enforcement agencies and other executive officers in homeland security and defense, as well as first responders, to have some basic information about the latest trends in mobile, portable, lightweight wireless video recording solutions available on the market. This paper reviews and discusses a number of studies on the use and effectiveness of wireless video recording solutions. It provides insights into the features of wearable video recording devices that offer excellent applications for the categories of security agencies listed in this paper. It also provides answers to key questions such as: how to determine the type of video recording solution most suitable for the needs of your agency, the essential features to look for when selecting a device for your video needs, and the privacy issues involved with wearable video recording devices.

  11. THE DETERMINATION OF THE SHARPNESS DEPTH BORDERS AND CORRESPONDING PHOTOGRAPHY AND VIDEO RECORDING PARAMETERS FOR CONTEMPORARY VIDEO TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    E. G. Zaytseva

    2011-01-01

    Full Text Available The method of determination of the sharpness depth borders was improved for contemporary video technology. The computer programme for determination of corresponding video recording parameters was created.

  12. Seizure semiology inferred from clinical descriptions and from video recordings. How accurate are they?

    DEFF Research Database (Denmark)

    Beniczky, Simona Alexandra; Fogarasi, András; Neufeld, Miri

    2012-01-01

    To assess how accurate the interpretation of seizure semiology is when inferred from witnessed seizure descriptions and from video recordings, five epileptologists analyzed 41 seizures from 30 consecutive patients who had clinical episodes in the epilepsy monitoring unit. For each clinical episode...... for the descriptions (k=0.67) and almost perfect for the video recordings (k=0.95). Video recordings significantly increase the accuracy of seizure interpretation....

  13. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  14. Extended image differencing for change detection in UAV video mosaics

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e., observations taken at time intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to the larger scene coverage.
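The change-mask step this record describes (an adaptive threshold over a linear combination of intensity and gradient-magnitude difference images) can be sketched in a few lines. This is a toy illustration of the idea, not the paper's implementation; the weights and the threshold factor k are made-up assumptions.

```python
# Toy sketch: change mask from registered frames via adaptive thresholding
# of w_int * |intensity diff| + w_grad * |gradient-magnitude diff|.
import statistics

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a 2-D list image."""
    h, w = len(img), len(img[0])
    gx = [[img[y][min(x + 1, w - 1)] - img[y][x] for x in range(w)] for y in range(h)]
    gy = [[img[min(y + 1, h - 1)][x] - img[y][x] for x in range(w)] for y in range(h)]
    return [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5 for x in range(w)] for y in range(h)]

def change_mask(img_a, img_b, w_int=0.5, w_grad=0.5, k=2.0):
    """Binary mask: 1 where the combined difference exceeds mean + k * stddev."""
    ga, gb = gradient_magnitude(img_a), gradient_magnitude(img_b)
    h, w = len(img_a), len(img_a[0])
    d = [[w_int * abs(img_a[y][x] - img_b[y][x]) +
          w_grad * abs(ga[y][x] - gb[y][x]) for x in range(w)] for y in range(h)]
    flat = [v for row in d for v in row]
    thresh = statistics.mean(flat) + k * statistics.pstdev(flat)
    return [[1 if v > thresh else 0 for v in row] for row in d]
```

Distinguishing relevant from non-relevant changes (shadows, parallax, compression artifacts) would require further filtering on top of this raw mask.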

  15. An introduction to video image compression and authentication technology for safeguards applications

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1995-01-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced, complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow-bandwidth channels such as the common telephone line. This paper discusses the video compression process, the authentication algorithm, and the data format selected to transmit and store the authenticated images.
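The compress-then-authenticate pipeline this record describes can be illustrated with standard primitives. This sketch is not the safeguards system itself: zlib and HMAC-SHA256 stand in for whatever codec and authentication algorithm a real system would use, and the key is a made-up placeholder.

```python
# Illustrative sketch: compress a digitized frame, then append a keyed
# authentication tag so tampering with the stored/transmitted data is detectable.
import hashlib
import hmac
import zlib

KEY = b"shared-secret-key"   # hypothetical key held by the monitoring authority

def pack_frame(raw_image_bytes):
    """Compress the frame and append a 32-byte HMAC-SHA256 tag."""
    compressed = zlib.compress(raw_image_bytes, level=9)
    tag = hmac.new(KEY, compressed, hashlib.sha256).digest()
    return compressed + tag

def verify_frame(packet):
    """Check the tag, then return the decompressed frame; raise on tampering."""
    compressed, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, compressed, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: frame may have been altered")
    return zlib.decompress(compressed)

frame = bytes(range(256)) * 64   # dummy "video frame"
assert verify_frame(pack_frame(frame)) == frame
```

Authenticating the compressed file rather than the raw frame is exactly the efficiency point the abstract makes: the smaller file takes less time to process and transmit.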

  16. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
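The quality-filter and coarse stages of such a pipeline can be sketched simply, under the assumption that the pupil is the darkest compact region of the frame. This is a simplified illustration, not the authors' method: a cheap filter rejects frames without enough dark pixels (no eye present), and a dark-pixel centroid gives a coarse pupil estimate that a fine stage (e.g. direct least-squares ellipse fitting) could then refine.

```python
# Simplified sketch of a quality filter and coarse pupil localisation
# for noncooperative iris video frames (2-D lists of grayscale values).

def quality_filter(img, dark_thresh=50, min_dark=4):
    """Accept the frame only if enough dark (pupil-like) pixels exist."""
    dark = sum(1 for row in img for p in row if p < dark_thresh)
    return dark >= min_dark

def coarse_pupil_center(img, dark_thresh=50):
    """Centroid of dark pixels: a coarse starting point for fine fitting."""
    pts = [(x, y) for y, row in enumerate(img)
                  for x, p in enumerate(row) if p < dark_thresh]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

Running the cheap filter first is what makes the coarse-to-fine scheme efficient: expensive ellipse fitting is only attempted on frames that pass it.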

  17. Does Wearable Medical Technology With Video Recording Capability Add Value to On-Call Surgical Evaluations?

    Science.gov (United States)

    Gupta, Sameer; Boehme, Jacqueline; Manser, Kelly; Dewar, Jannine; Miller, Amie; Siddiqui, Gina; Schwaitzberg, Steven D

    2016-10-01

    Background Google Glass has been used in a variety of medical settings with promising results. We explored the use and potential value of an asynchronous, near-real time protocol (which avoids transmission issues associated with real-time applications) for recording, uploading, and viewing of high-definition (HD) visual media in the emergency department (ED) to facilitate remote surgical consults. Study Design First-responder physician assistants captured pertinent aspects of the physical examination and diagnostic imaging using Google Glass' HD video or high-resolution photographs. These visual media were then securely uploaded to the study website. The surgical consultation then proceeded over the phone in the usual fashion and a clinical decision was made. The surgeon then accessed the study website to review the uploaded video. This was followed by a questionnaire regarding how the additional data impacted the consultation. Results The management plan changed in 24% (11) of cases after surgeons viewed the video. Five of these plans involved decision making regarding operative intervention. Although surgeons were generally confident in their initial management plan, confidence scores increased further in 44% (20) of cases. In addition, we surveyed 276 ED patients on their opinions regarding the practice of health care providers wearing and using recording devices in the ED. The survey results revealed that the majority of patients are amenable to the addition of wearable technology with video functionality to their care. Conclusions This study demonstrates the potential value of a medically dedicated, hands-free, HD recording device with internet connectivity in facilitating remote surgical consultation. © The Author(s) 2016.

  18. The influence of video recordings on beginning therapist’s learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    2010-01-01

    Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents...

  19. An experimental digital consumer recorder for MPEG-coded video signals

    NARCIS (Netherlands)

    Saeijs, R.W.J.J.; With, de P.H.N.; Rijckaert, A.M.A.; Wong, C.

    1995-01-01

    The concept and real-time implementation of an experimental home-use digital recorder are presented, capable of recording MPEG-compressed video signals. The system has small recording mechanics based on the DVC standard, and it uses MPEG compression for trick-mode signals as well.

  20. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is approximately four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) were recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
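    The "four times as many pixels" figure can be checked with simple arithmetic; the exact ratio between the 4K format used here and Full HD is about 4.27:

```python
# Pixel counts for the two formats named in the record.
uhd_pixels = 4096 * 2160   # 4K, as used for the microscopy recordings
hd_pixels = 1920 * 1080    # High-Definition Video
ratio = uhd_pixels / hd_pixels  # ~4.27, commonly rounded to "four times"
```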

  1. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    Science.gov (United States)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  2. Evaluating Student Self-Assessment through Video-Recorded Patient Simulations.

    Science.gov (United States)

    Sanderson, Tammy R; Kearney, Rachel C; Kissell, Denise; Salisbury, Jessica

    2016-08-01

    The purpose of this pilot study was to determine if the use of a video-recorded clinical session affects the accuracy of dental hygiene student self-assessment and dental hygiene instructor feedback. A repeated measures experiment was conducted. The use of the ODU 11/12 explorer was taught to students and participating faculty through video and demonstration. Students then demonstrated activation of the explorer on a student partner using the same technique. While faculty completed the student assessment in real time, the sessions were video recorded. After completing the activation of the explorer, students and faculty completed an assessment of the student's performance using a rubric. A week later, both students and faculty viewed the video of the clinical skill performance and reassessed the student's performance using the same rubric. The student videos were randomly assigned a number, so faculty reassessed the performance without access to the student's identity or the score that was initially given. Twenty-eight students and 4 pre-clinical faculty completed the study. Students' average score was 4.68±1.16 on the first assessment and slightly higher (4.89±1.45) when reviewed on video. Faculty average scores were 5.07±2.13 on the first assessment and 4.79±2.54 on the second assessment with the video. Although no significant differences were found in the overall scores, there was a significant difference in the grading criteria scores compared with the expert assessment scores (p=0.0001). This pilot study shows that calibration and assessment without bias in education is a challenge. Analyzing and incorporating new techniques can result in more exact assessment of student performance and self-assessment. Copyright © 2016 The American Dental Hygienists' Association.

  3. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
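    The pre-trigger and post-trigger storage technique described above can be sketched in software: a ring buffer keeps the most recent frames, and a trigger freezes them together with a fixed number of subsequent frames. This sketch is not the NASA VLSI design; the buffer sizes and the trigger source are placeholder assumptions.

```python
from collections import deque

class TriggeredRecorder:
    """Keep the last `pre` frames in a ring buffer; once an event
    trigger fires, capture `post` further frames and archive the
    pre-trigger and post-trigger frames together."""

    def __init__(self, pre=4, post=4):
        self.ring = deque(maxlen=pre)  # pre-trigger ring buffer
        self.post = post
        self.remaining = None          # post-trigger frames still to capture
        self.archive = None            # frames saved around the event

    def push(self, frame, trigger=False):
        """Feed one frame; returns True when the archive is complete."""
        if self.remaining is None:
            self.ring.append(frame)
            if trigger:  # event onset detected on this frame
                self.archive = list(self.ring)
                self.remaining = self.post
        else:
            self.archive.append(frame)
            self.remaining -= 1
        return self.remaining == 0
```

    In the record's terms, only the frames around the detected video event are archived, instead of the full multi-hundred-MB/s stream.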

  4. How to implement live video recording in the clinical environment: A practical guide for clinical services.

    Science.gov (United States)

    Lloyd, Adam; Dewar, Alistair; Edgar, Simon; Caesar, Dave; Gowens, Paul; Clegg, Gareth

    2017-06-01

    The use of video in healthcare is becoming more common, particularly in simulation and educational settings. However, video recording live episodes of clinical care is far less routine. To provide a practical guide for clinical services to embed live video recording. Using Kotter's 8-step process for leading change, we provide a 'how to' guide to navigate the challenges required to implement a continuous video-audit system based on our experience of video recording in our emergency department resuscitation rooms. The most significant hurdles in installing continuous video audit in a busy clinical area involve change management rather than equipment. Clinicians are faced with considerable ethical, legal and data protection challenges which are the primary barriers for services that pursue video recording of patient care. Existing accounts of video use rarely acknowledge the organisational and cultural dimensions that are key to the success of establishing a video system. This article outlines core implementation issues that need to be addressed if video is to become part of routine care delivery. By focussing on issues such as staff acceptability, departmental culture and organisational readiness, we provide a roadmap that can be pragmatically adapted by all clinical environments, locally and internationally, that seek to utilise video recording as an approach to improving clinical care. © 2017 John Wiley & Sons Ltd.

  5. Large-Scale Query-by-Image Video Retrieval Using Bloom Filters

    OpenAIRE

    Araujo, Andre; Chaves, Jason; Lakshman, Haricharan; Angst, Roland; Girod, Bernd

    2016-01-01

    We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to ...
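    A minimal Bloom filter of the kind this framework builds on might look as follows: one filter indexes all the descriptors of a long video segment, so an image query is checked once per segment instead of once per frame. The sizes (`m`, `k`) and the use of string descriptors are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Set-membership sketch: `add` items, then `might_contain` answers
    with no false negatives and a tunable false-positive rate."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k  # bit-array size and number of hashes
        self.bits = 0          # integer used as an m-bit array

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

    A query image's descriptors would first be tested against each segment's filter; only segments that might contain them are searched frame by frame.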

  6. Video Recording With a GoPro in Hand and Upper Extremity Surgery.

    Science.gov (United States)

    Vara, Alexander D; Wu, John; Shin, Alexander Y; Sobol, Gregory; Wiater, Brett

    2016-10-01

    Video recordings of surgical procedures are an excellent tool for presentations, analyzing self-performance, illustrating publications, and educating surgeons and patients. Recording the surgeon's perspective with high-resolution video in the operating room or clinic has become readily available, and advances in software improve the ease of editing these videos. A GoPro HERO 4 Silver or Black was mounted on a head strap and worn over the surgical scrub cap, above the loupes of the operating surgeon. Five live surgical cases were recorded with the camera. The videos were uploaded to a computer and subsequently edited with iMovie or the GoPro software. The optimal settings for both the Silver and Black editions, when operating room lights are used, were determined to be a narrow view, 1080p, 60 frames per second (fps), spot meter on, protune on with auto white balance, exposure compensation at -0.5, and no polarizing lens. When the operating room lights were not used, the standard settings for a GoPro camera were found to be ideal for positioning and editing (4K, 15 fps, spot meter and protune off). The GoPro HERO 4 provides high-quality, cost-effective video recording of upper extremity surgical procedures from the surgeon's perspective. Challenges include finding the optimal settings for each surgical procedure and the length of recording due to battery life limitations. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  7. PVR system design of advanced video navigation reinforced with audible sound

    NARCIS (Netherlands)

    Eerenberg, O.; Aarts, R.; De With, P.N.

    2014-01-01

    This paper presents an advanced video navigation concept for Personal Video Recording (PVR), based on jointly using the primary image and a Picture-in-Picture (PiP) image, featuring combined rendering of normal-play video fragments with audio and fast-search video. The hindering loss of audio during

  8. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the investigated case study). As the main supporting assumption, it has been accepted that the content can be compressed as long as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (the hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.
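    The objective test procedure (a per-frame metric averaged over the whole sequence) can be illustrated with PSNR. Note the study used different metrics (the hybrid vector measure and Hosaka plots), so PSNR here is only a stand-in to show the frame-by-frame averaging.

```python
import math

def frame_psnr(ref, deg, peak=255.0):
    """PSNR of one frame; `ref` and `deg` are flat lists of 8-bit pixels."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, deg)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def sequence_psnr(ref_frames, deg_frames):
    """Compute the metric frame by frame and average over the sequence,
    mirroring the procedure described in the record."""
    scores = [frame_psnr(r, d) for r, d in zip(ref_frames, deg_frames)]
    return sum(scores) / len(scores)
```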

  9. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  10. Application of video imaging for improvement of patient set-up

    International Nuclear Information System (INIS)

    Ploeger, Lennert S.; Frenay, Michel; Betgen, Anja; Bois, Josien A. de; Gilhuijs, Kenneth G.A.; Herk, Marcel van

    2003-01-01

    Background and purpose: For radiotherapy of prostate cancer, the patient is usually positioned in the left-right (LR) direction by aligning a single marker on the skin with the projection of a room laser. The aim of this study is to investigate the feasibility of a room-mounted video camera in combination with previously acquired CT data to improve patient set-up along the LR axis. Material and methods: The camera was mounted in the treatment room at the caudal side of the patient. For 22 patients with prostate cancer 127 video and portal images were acquired. The set-up error determined by video imaging was found by matching video images with rendered CT images using various techniques. This set-up error was retrospectively compared with the set-up error derived from portal images. It was investigated whether the number of corrections based on portal imaging would decrease if the information obtained from the video images had been used prior to irradiation. Movement of the skin with respect to bone was quantified using an analysis of variance method. Results: The measurement of the set-up error was most accurate for a technique where outlines and groins on the left and right side of the patient were delineated and aligned individually to the corresponding features extracted from the rendered CT image. The standard deviations (SD) of the systematic and random components of the set-up errors derived from the portal images in the LR direction were 1.5 and 2.1 mm, respectively. When the set-up of the patients was retrospectively adjusted based on the video images, the SD of the systematic and random errors decreased to 1.1 and 1.3 mm, respectively. From retrospective analysis, a reduction of the number of set-up corrections (from nine to six corrections) is expected when the set-up would have been adjusted using the video images. The SD of the magnitude of motion of the skin of the patient with respect to the bony anatomy was estimated to be 1.1 mm. 
Conclusion: Video

  11. Image ranking in video sequences using pairwise image comparisons and temporal smoothing

    CSIR Research Space (South Africa)

    Burke, Michael

    2016-12-01

    The ability to predict the importance of an image is highly desirable in computer vision. This work introduces an image ranking scheme suitable for use in video or image sequences. Pairwise image comparisons are used to determine image ‘interest...
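    One common way to turn pairwise image comparisons into interest scores is an Elo-style rating update; the record does not name its ranking model (nor its temporal smoothing), so the model and constants below are illustrative assumptions.

```python
def elo_rank(n_images, comparisons, k=32.0):
    """Turn pairwise 'image a was judged more interesting than image b'
    outcomes into per-image scores via Elo-style updates."""
    scores = [1000.0] * n_images
    for winner, loser in comparisons:
        # Expected probability that `winner` beats `loser` at current scores.
        expected = 1.0 / (1.0 + 10 ** ((scores[loser] - scores[winner]) / 400.0))
        delta = k * (1.0 - expected)  # surprise-weighted update
        scores[winner] += delta
        scores[loser] -= delta
    return scores
```

    For video, the resulting per-frame scores would then be smoothed over time, as the title suggests.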

  12. The art of assessing quality for images and video

    International Nuclear Information System (INIS)

    Deriche, M.

    2011-01-01

    The early years of this century have witnessed a tremendous growth in the use of digital multimedia data for different communication applications. Researchers from around the world are spending substantial research efforts in developing techniques for improving the appearance of images/video. However, as we know, preserving high quality is a challenging task. Images are subject to distortions during acquisition, compression, transmission, analysis, and reconstruction. For this reason, the research area focusing on image and video quality assessment has attracted a lot of attention in recent years. In particular, compression applications and other multimedia applications need powerful techniques for evaluating quality objectively without human interference. This tutorial will cover the different faces of image quality assessment. We will motivate the need for robust image quality assessment techniques, then discuss the main algorithms found in the literature with a critical perspective. We will present the different metrics used for full reference, reduced reference and no reference applications. We will then discuss the difference between image and video quality assessment. In all of the above, we will take a critical approach to explain which metric can be used for which application. Finally we will discuss the different approaches to analyze the performance of image/video quality metrics, and end the tutorial with some perspectives on newly introduced metrics and their potential applications.

  13. Introducing video recording in primary care midwifery for research purposes: procedure, dataset, and use.

    NARCIS (Netherlands)

    Spelten, E.R.; Martin, L.; Gitsels, J.T.; Pereboom, M.T.R.; Hutton, E.K.; Dulmen, S. van

    2015-01-01

    Background: video recording studies have been found to be complex; however very few studies describe the actual introduction and enrolment of the study, the resulting dataset and its interpretation. In this paper we describe the introduction and the use of video recordings of health care provider

  14. Progress in video immersion using Panospheric imaging

    Science.gov (United States)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI-based Video-Servoing concepts, PI-based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  15. Video Vortex reader II: moving images beyond YouTube

    NARCIS (Netherlands)

    Lovink, G.; Somers Miles, R.

    2011-01-01

    Video Vortex Reader II is the Institute of Network Cultures' second collection of texts that critically explore the rapidly changing landscape of online video and its use. With the success of YouTube ('2 billion views per day') and the rise of other online video sharing platforms, the moving image

  16. Checking Interceptions and Audio Video Recordings by the Court after Referral

    Directory of Open Access Journals (Sweden)

    Sandra Grădinaru

    2012-05-01

    In any event, the prosecutor and the judiciary should pay particular attention to the risk of their falsification, which can be achieved by taking only parts of conversations or communications that took place in the past and are declared to be registered recently, by removing parts of conversations or communications, or even by the translation or removal of images. This is why the legislature provided an express provision for their verification. The provisions of art. 916 Paragraph 1 of the Criminal Procedure Code offer the possibility of a technical expertise regarding the originality and continuity of the records, at the request of the prosecutor or the parties or ex officio, where there are doubts about the correctness of the registration in whole or in part, especially if it is not supported by all the evidence. Therefore, audio or video recordings serve by themselves as evidence in criminal proceedings if not appealed, or if confirmed by technical expertise where there were doubts about their conformity with reality. In the event that expertise does not confirm the authenticity of the records, they will not be accepted as evidence in solving a criminal case, thus eliminating any probative value of the intercepted conversations and communications in that case, by applying Article 64 Par. 2 of the Criminal Procedure Code.

  17. Algorithm for Video Summarization of Bronchoscopy Procedures

    Directory of Open Access Journals (Sweden)

    Leszczuk Mikołaj I

    2011-12-01

    Background: The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods: The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results: The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions
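    "Non-informative" (blurry or unfocused) frames of the kind described above are often detected via the variance of a Laplacian response, which is low for blurred images. The kernel and threshold below are illustrative assumptions, not the authors' published criteria.

```python
def laplacian_variance(frame):
    """Variance of a 4-neighbor Laplacian over a 2D list of gray levels;
    sharp frames produce strong responses, blurred frames produce weak ones."""
    h, w = len(frame), len(frame[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (frame[y - 1][x] + frame[y + 1][x]
                   + frame[y][x - 1] + frame[y][x + 1]
                   - 4 * frame[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def is_informative(frame, threshold=10.0):
    """Keep a frame in the summary only if it is sharp enough."""
    return laplacian_variance(frame) >= threshold
```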

  18. 75 FR 63434 - Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording...

    Science.gov (United States)

    2010-10-15

    ...] Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording Equipment in... the availability of a compliance guide on the use of video or other electronic monitoring or recording... providing this draft guide to advise establishments that video or other electronic monitoring or recording...

  19. Radiation effects on video imagers

    International Nuclear Information System (INIS)

    Yates, G.J.; Bujnosek, J.J.; Jaramillo, S.A.; Walton, R.B.; Martinez, T.M.; Black, J.P.

    1985-01-01

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analyzing stored photocharge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented

  20. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  1. Self-Reflection of Video-Recorded High-Fidelity Simulations and Development of Clinical Judgment.

    Science.gov (United States)

    Bussard, Michelle E

    2016-09-01

    Nurse educators are increasingly using high-fidelity simulators to improve prelicensure nursing students' ability to develop clinical judgment. Traditionally, oral debriefing sessions have immediately followed the simulation scenarios as a method for students to connect theory to practice and therefore develop clinical judgment. Recently, video recording of the simulation scenarios is being incorporated. This qualitative, interpretive description study was conducted to identify whether self-reflection on video-recorded high-fidelity simulation (HFS) scenarios helped prelicensure nursing students to develop clinical judgment. Tanner's clinical judgment model was the framework for this study. Four themes emerged from this study: Confidence, Communication, Decision Making, and Change in Clinical Practice. This study indicated that self-reflection of video-recorded HFS scenarios is beneficial for prelicensure nursing students to develop clinical judgment. [J Nurs Educ. 2016;55(9):522-527.]. Copyright 2016, SLACK Incorporated.

  2. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a very crucial component in safeguards and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities like better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguards requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguards surveillance systems. Today's safeguards systems can incorporate intelligent motion detection with a very low rate of false alarms and a smaller archiving volume, embedded image processing capability for object behavior and event-based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating storage and transmission. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of front-end video imagery exploitation tools like automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, gesture recognition, etc. makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional, all-weather, day-night surveillance a reality.
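    At its simplest, the intelligent motion detection mentioned above is frame differencing with an area threshold: flag motion only when enough pixels change by enough. The two thresholds below are illustrative assumptions.

```python
def motion_detected(prev, curr, pixel_thresh=25, area_thresh=0.01):
    """Flag motion when the fraction of pixels whose absolute difference
    from the previous frame exceeds `pixel_thresh` is larger than
    `area_thresh` of the frame (frames are flat lists of gray levels)."""
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) > pixel_thresh)
    return changed / len(prev) > area_thresh
```

    Real systems add background modeling and temporal filtering to keep the false-alarm rate low, as the record emphasizes.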

  3. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Seymour Rowan

    2008-01-01

    Full Text Available Abstract We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  4. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  5. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
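    The supervision trick described in this abstract, synthesizing motion-blurred inputs by averaging consecutive sharp frames from a high-framerate camera, can be sketched as follows. The function name and window size are illustrative, not taken from the paper:

```python
import numpy as np

def synthesize_motion_blur(sharp_frames, window=5):
    """Average `window` consecutive sharp frames to mimic the longer
    exposure of a blurry frame; the centre frame is the sharp target."""
    frames = np.asarray(sharp_frames, dtype=np.float64)
    blurred, targets = [], []
    for i in range(len(frames) - window + 1):
        clip = frames[i:i + window]
        blurred.append(clip.mean(axis=0))   # simulated blurry input
        targets.append(clip[window // 2])   # sharp supervision target
    return np.stack(blurred), np.stack(targets)

# A static scene is unchanged by averaging; a moving edge would be smeared.
static = np.full((10, 4, 4), 128.0)
blur, sharp = synthesize_motion_blur(static)
```

    Each (blurry, sharp) pair can then serve as a training example for the end-to-end CNN.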

  6. Heterogeneity image patch index and its application to consumer video summarization.

    Science.gov (United States)

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max-based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
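    The general idea of an entropy-based heterogeneity measure over patches can be illustrated with a simplified stand-in (this is not the authors' exact HIP formulation; the patch size and binning are invented for the sketch):

```python
import numpy as np

def patch_heterogeneity(frame, patch=8):
    """Simplified stand-in for the HIP idea: Shannon entropy of the
    distribution of mean intensities over non-overlapping patches."""
    h, w = frame.shape
    means = [frame[i:i + patch, j:j + patch].mean()
             for i in range(0, h - patch + 1, patch)
             for j in range(0, w - patch + 1, patch)]
    hist, _ = np.histogram(means, bins=16, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())  # entropy in bits

flat = np.full((64, 64), 100.0)                          # homogeneous frame
noisy = np.random.default_rng(0).uniform(0, 256, (64, 64))
assert patch_heterogeneity(flat) < patch_heterogeneity(noisy)
```

    Evaluating such a measure per frame yields a curve over the sequence, from which locally extremal frames can be picked as key-frame candidates.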

  7. A kind of video image digitizing circuit based on computer parallel port

    International Nuclear Information System (INIS)

    Wang Yi; Tang Le; Cheng Jianping; Li Yuanjing; Zhang Binquan

    2003-01-01

    A video image digitizing circuit based on the computer parallel port was developed to digitize the flash X-ray images in our Multi-Channel Digital Flash X-ray Imaging System. The circuit digitizes the video images and stores them in static memory. The digital images can be transferred to a computer through the parallel port, where they can be displayed, processed and stored. (authors)

  8. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
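    Rate-distortion comparisons of the kind mentioned in this abstract conventionally use peak signal-to-noise ratio (PSNR) as the quality criterion for the decompressed image; a minimal sketch:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, a standard rate-distortion
    quality measure for decompressed images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110            # one distorted pixel
```

    Plotting PSNR against bits per pixel over a test-image database gives the rate-distortion curves used to rank competing transforms.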

  9. The distinguishing motor features of cataplexy: a study from video-recorded attacks.

    Science.gov (United States)

    Pizza, Fabio; Antelmi, Elena; Vandi, Stefano; Meletti, Stefano; Erro, Roberto; Baumann, Christian R; Bhatia, Kailash P; Dauvilliers, Yves; Edwards, Mark J; Iranzo, Alex; Overeem, Sebastiaan; Tinazzi, Michele; Liguori, Rocco; Plazzi, Giuseppe

    2018-05-01

    To describe the motor pattern of cataplexy and to determine its phenomenological differences from pseudocataplexy in the differential diagnosis of episodic falls. We selected 30 video-recorded cataplexy and 21 pseudocataplexy attacks in 17 and 10 patients evaluated for suspected narcolepsy and with final diagnoses of narcolepsy type 1 and conversion disorder, respectively, together with self-reported attack features, and asked expert neurologists to blindly evaluate the motor features of the attacks. Video-documented and self-reported attack features of cataplexy and pseudocataplexy were contrasted. Video-recorded cataplexy can be positively differentiated from pseudocataplexy by the occurrence of facial hypotonia (ptosis, mouth opening, tongue protrusion) intermingled with jerks and grimaces abruptly interrupting laughter behavior (i.e. smile, facial expression) and postural control (head drops, trunk fall) under a clear emotional trigger. Facial involvement is present in both partial and generalized cataplexy. Conversely, generalized pseudocataplexy is associated with persistence of deep tendon reflexes during the attack. Self-reported features confirmed the important role of positive emotions (laughter, telling a joke) in triggering the attacks, as well as the more frequent occurrence of partial body involvement in cataplexy compared with pseudocataplexy. Cataplexy is characterized by abrupt facial involvement during laughter behavior. Video recording of suspected cataplexy attacks allows the identification of positive clinical signs useful for diagnosis and, possibly in the future, for severity assessment.

  10. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  11. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  12. Overview of image processing tools to extract physical information from JET videos

    International Nuclear Information System (INIS)

    Craciunescu, T; Tiseanu, I; Zoita, V; Murari, A; Gelfusa, M

    2014-01-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
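    The optical-flow methodology mentioned in both JET abstracts can be illustrated with a single-window Lucas-Kanade estimate of global frame motion. This is a toy sketch of the underlying principle only; the JET tools operate per region, handle deformable objects, and work in the MPEG compressed domain:

```python
import numpy as np

def lucas_kanade_global(f0, f1):
    """Single-window Lucas-Kanade: least-squares global flow (u, v)
    solving Ix*u + Iy*v + It = 0 over the whole frame."""
    Iy, Ix = np.gradient(f0)                 # spatial image gradients
    It = f1 - f0                             # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(u), float(v)

x = np.arange(64)
xx, yy = np.meshgrid(x, x)
f0 = np.sin(0.2 * xx) + np.cos(0.15 * yy)          # smooth test pattern
f1 = np.sin(0.2 * (xx - 1.0)) + np.cos(0.15 * yy)  # shifted 1 px in x
u, v = lucas_kanade_global(f0, f1)
```

    On this synthetic pair the recovered flow is approximately (1, 0), matching the one-pixel horizontal shift.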

  13. Surgeon-Manipulated Live Surgery Video Recording Apparatuses: Personal Experience and Review of Literature.

    Science.gov (United States)

    Kapi, Emin

    2017-06-01

    Visual recording of surgical procedures is a method used quite frequently in plastic surgery practice. While presentations containing photographs are quite common in education seminars and congresses, video presentations find more favour. For this reason, the presentation of surgical procedures as real-time video has increased, especially recently. Appropriate technical equipment for video recording is not available in most hospitals, so there is a need to set up external apparatus in the operating room. Options include head-mounted video cameras, chest-mounted cameras, and tripod-mountable cameras. The head-mounted video camera is an apparatus capable of capturing high-resolution and detailed close-up footage. The tripod-mountable camera enables video capture from a fixed point. Certain user-specific modifications can be made to overcome some of these restrictions. Among these modifications, custom-made applications are one of the most effective solutions. This article presents the features of, and the author's experience with, a combination of a head- or chest-mounted action camera, a custom-made portable tripod apparatus with versatile features, and an underwater camera. The apparatuses described are easy to assemble, quickly installed, and inexpensive, do not require specific technical knowledge, and can be manipulated by the surgeon personally in all procedures. The author believes that in the near future video recording apparatuses will be integrated further into the operating room, become standard practice, and become more amenable to manipulation by the surgeon. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  14. Computational multispectral video imaging [Invited].

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
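    The "calibrate, then invert the spatial code via regularization-based linear algebra" step can be sketched as a Tikhonov-regularized least-squares solve. The matrix sizes, noise level, and regularization weight below are invented for illustration and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(48, 16))         # calibration matrix: spectrum -> coded pixels
s_true = np.abs(rng.normal(size=16))  # unknown spectrum at one image point
y = A @ s_true + 0.01 * rng.normal(size=48)   # noisy coded sensor reading

# Tikhonov (ridge) regularized inversion: (A^T A + lam I) s = A^T y
lam = 1e-3
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(16), A.T @ y)
rel_err = np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true)
```

    With a well-conditioned calibration matrix the recovered spectrum is close to the true one; the regularization weight trades noise amplification against bias.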

  15. Forensic applications of infrared imaging for the detection and recording of latent evidence.

    Science.gov (United States)

    Lin, Apollo Chun-Yen; Hsieh, Hsing-Mei; Tsai, Li-Chin; Linacre, Adrian; Lee, James Chun-I

    2007-09-01

    We report on a simple method to record infrared (IR) reflected images in a forensic science context. Light sources using ultraviolet light have been used previously in the detection of latent prints, but the use of infrared light has been subject to less investigation. IR light sources were used to search for latent evidence, and the images were captured either by video or with a digital camera whose CCD array is sensitive to IR wavelengths. Bloodstains invisible to the eye, inks, tire prints, gunshot residue, and charred documents on dark backgrounds were selected as typical materials that may be identified during a forensic investigation. All the evidence types could be detected and identified using a range of photographic techniques. In this study, a one-in-eight dilution of blood could be detected on 10 different samples of black cloth. With 81 black writing inks, the observation rates were 95%, 88% and 42% for permanent markers, fountain pens and ball-point pens, respectively, on the three kinds of dark cloth. The black particles of gunshot residue scattered around the entrance hole were still observable under IR light at a distance of 60 cm for three different shooting ranges. A requirement of IR reflectivity is that there be a contrast between the latent evidence and the background; in the absence of this contrast no latent image will be detected, as with all light sources. The use of a video camera allows the recording of images either at a scene or in the laboratory. This report highlights and demonstrates the robustness of IR for detecting and recording the presence of latent evidence.

  16. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach to the generation of UAV trajectories using a video image matching system based on SURF (Speeded-Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find matching points between video image sequences, and removed mismatches using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results on simulated video image sequences showed that our approach has good potential for application to the automatic geo-localization of UAV systems.
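    The RANSAC step, splitting matches into inliers and outliers before pose estimation, can be illustrated with a toy model. The sketch below estimates a 2D translation between matched point sets rather than the epipolar geometry used in the paper, and all names and parameters are invented:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Toy RANSAC: hypothesize a 2D translation from one random match,
    count inliers within `tol`, keep the best hypothesis."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                         # 1-point hypothesis
        err = np.linalg.norm(dst - (src + t), axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the translation over all inliers.
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

rng = np.random.default_rng(2)
src = rng.uniform(0, 100, (50, 2))
dst = src + np.array([5.0, -3.0])                   # true translation
dst[:10] += rng.uniform(20, 40, (10, 2))            # corrupt 10 matches
t, inl = ransac_translation(src, dst)
```

    The corrupted matches are rejected as outliers and the translation is recovered from the remaining 40 inliers; the same consensus logic applies when the model is an essential matrix instead of a translation.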

  17. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024times1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...

  18. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Aran Oya

    2007-01-01

    Full Text Available We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL and the cued speech (CS language which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people that involve SL and CS video synthesis.

  19. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    Science.gov (United States)

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
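    The core of the method, embedding a shared random signal in every stream and recovering the alignment afterwards, can be sketched with a sliding cross-correlation. The marker length, noise level, and offset below are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(3)
marker = rng.choice([-1.0, 1.0], size=256)   # shared random sync signal

offset = 1000                                # unknown lag of this stream
stream = rng.normal(0.0, 0.5, 4096)          # recording with sensor noise
stream[offset:offset + 256] += marker        # marker embedded in the stream

# Locate the marker by sliding cross-correlation; the peak gives the lag.
corr = np.correlate(stream, marker, mode="valid")
found = int(np.argmax(corr))
```

    Repeating this per stream yields each device's lag relative to the shared marker, so streams recorded on devices without dedicated synchronization inputs can still be aligned in post-processing.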

  20. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm image resolution was obtained by using a super-high-quality TV camera, developed for X-radiography, viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, so many test samples can be observed sequentially when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 x 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system proved satisfactory for penetrating the side plates to test MTR-type reactor fuels and for the investigation of moving objects.
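    The digitization stage described above, quantizing an analog frame to 16 gray levels on a 240 x 256 grid, can be sketched as follows (illustrative only; the actual hardware ADC pipeline differs):

```python
import numpy as np

def digitize_frame(analog, levels=16):
    """Quantize an analog video frame (values in [0, 1)) to `levels`
    gray levels, as in the 16-level, 240 x 256 KUR image buffer."""
    frame = np.clip(analog, 0.0, 1.0 - 1e-9)
    return (frame * levels).astype(np.uint8)     # codes 0 .. levels-1

analog = np.linspace(0.0, 1.0, 240 * 256, endpoint=False).reshape(240, 256)
digital = digitize_frame(analog)
```

    At 4 bits per pixel, one such frame occupies 240 * 256 / 2 = 30720 bytes, which fits in the 32 KB DRAM image buffer mentioned in the abstract.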

  1. Analysis of physiological responses associated with emotional changes induced by viewing video images of dental treatments.

    Science.gov (United States)

    Sekiya, Taki; Miwa, Zenzo; Tsuchihashi, Natsumi; Uehara, Naoko; Sugimoto, Kumiko

    2015-03-30

    Since understanding the emotional changes induced by dental treatments is important for dentists to provide safe and comfortable dental treatment, we analyzed physiological responses while subjects watched video images of dental treatments, in order to search for appropriate objective indices reflecting emotional changes. Fifteen healthy young adult subjects voluntarily participated in the present study. Electrocardiogram (ECG), electroencephalogram (EEG) and corrugator muscle electromyogram (EMG) were recorded, and their changes during viewing of the videos of dental treatments were analyzed. The subjective discomfort level was acquired by the Visual Analog Scale method. Analyses of autonomic nervous activities from the ECG and of four emotional factors (anger/stress, joy/satisfaction, sadness/depression and relaxation) from the EEG demonstrated that increases in sympathetic nervous activity, reflecting increased stress, and decreases in relaxation level were induced by the videos of infiltration anesthesia and cavity excavation, but not by intraoral examination. The corrugator muscle activity was increased by all three images regardless of video content. Subjective discomfort while watching infiltration anesthesia and cavity excavation was higher than for intraoral examination, showing that sympathetic activities and the relaxation factor of emotion changed in a manner consistent with subjective emotional changes. These results suggest that measurement of autonomic nervous activities estimated from the ECG and emotional factors analyzed from the EEG is useful for objective evaluation of subjective emotion.

  2. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    Full Text Available A submersible video camera system, which aims to record images of the growth of aquatic vegetation in Antarctic lakes for one year, was manufactured. The system consists of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video, without increasing power consumption. The system was set on the lake floor in Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater visual images for one year has been started by our diving operation.

  3. Linear array of photodiodes to track a human speaker for video recording

    International Nuclear Information System (INIS)

    DeTone, D; Neal, H; Lougheed, R

    2012-01-01

    Communication and collaboration using stored digital media has garnered more interest in many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.

  4. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest in many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
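    The noise-filtering benefit of flashing the LEDs at 70 Hz while sampling at 4 kHz can be illustrated with a lock-in style correlation against the known flash frequency: a constant infrared background such as sunlight averages out, while the modulated LED survives. Amplitudes and noise levels in this sketch are invented:

```python
import numpy as np

fs = 4000.0                      # photodiode array frame rate, 4 kHz
f_led = 70.0                     # LED flash rate, 50% duty cycle
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(4)

led = 0.2 * (np.sin(2 * np.pi * f_led * t) > 0)   # square wave, 50% duty
sunlight = 3.0                                     # large constant background
signal = led + sunlight + 0.05 * rng.normal(size=t.size)

# Lock-in detection: correlate against a 70 Hz reference. The DC sunlight
# term integrates to zero over whole cycles; only the LED component remains.
ref = np.sin(2 * np.pi * f_led * t)
with_led = float(np.mean(signal * ref))
no_led = float(np.mean((sunlight + 0.05 * rng.normal(size=t.size)) * ref))
```

    The detection statistic is large only when the flashing necklace is present, even though the sunlight term is an order of magnitude brighter than the LEDs.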

  5. Comparative study of image registration techniques for bladder video-endoscopy

    Science.gov (United States)

    Ben Hamadou, Achraf; Soussen, Charles; Blondel, Walter; Daul, Christian; Wolf, Didier

    2009-07-01

    Bladder cancer is widespread worldwide, and many adequate diagnosis techniques exist. Video-endoscopy remains the standard clinical procedure for visual exploration of the bladder internal surface. However, video-endoscopy is limited by a small field of view: each image covers only about 1 cm², while lesions typically extend across several images. The aim of this contribution is to assess the performance of two mosaicing algorithms leading to the construction of panoramic maps (one unique image) of bladder walls. The quantitative comparison study is performed on a set of real endoscopic exam data and on simulated data from a bladder phantom.
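
    As a rough, hedged illustration of what such registration algorithms do, the sketch below chains translation-only phase-correlation registration into a small panoramic map of a simulated texture; the algorithms compared in the paper handle far richer bladder-surface deformations than pure translation:

```python
import numpy as np

def register_translation(ref, mov):
    """Estimate the integer (dy, dx) shift that places `mov` in `ref`'s
    coordinates, via phase correlation -- a deliberately simple stand-in
    for the registration algorithms compared in the paper."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    dy = int(dy) - h if dy > h // 2 else int(dy)   # unwrap circular shifts
    dx = int(dx) - w if dx > w // 2 else int(dx)
    return dy, dx

def build_mosaic(frames):
    """Chain pairwise registrations into one panoramic map."""
    pos = [(0, 0)]
    for prev, cur in zip(frames, frames[1:]):
        dy, dx = register_translation(prev, cur)
        pos.append((pos[-1][0] + dy, pos[-1][1] + dx))
    y0 = min(p[0] for p in pos)
    x0 = min(p[1] for p in pos)
    H = max(p[0] - y0 + f.shape[0] for p, f in zip(pos, frames))
    W = max(p[1] - x0 + f.shape[1] for p, f in zip(pos, frames))
    canvas = np.zeros((H, W))
    for (py, px), f in zip(pos, frames):
        canvas[py - y0:py - y0 + f.shape[0], px - x0:px - x0 + f.shape[1]] = f
    return canvas, pos

# Simulated bladder wall: a textured scene imaged as three overlapping 64x64 views.
rng = np.random.default_rng(1)
scene = rng.random((96, 120))
frames = [scene[16:80, x:x + 64] for x in (0, 12, 24)]
mosaic, positions = build_mosaic(frames)
print(positions)  # -> [(0, 0), (0, 12), (0, 24)]
```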

  6. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    Science.gov (United States)

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  7. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or Ethernet, and over wireless networks, cell phones, and portable game systems. Such communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the channel is characterized by a probabilistic model which describes its capacity or fidelity. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantage of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.

  8. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML. PHP is a powerful, modern server-side scripting language that produces HTML or XML output easily accessible via a web interface (in the browser of your choice), and it can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in the C++ language using various image processing libraries. (Author)

  9. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be of high quality, which allowed for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  10. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  11. Evaluation of video-printer images as secondary CT images for clinical use

    International Nuclear Information System (INIS)

    Doi, K.; Rubin, J.

    1983-01-01

    Video-printer (VP) images of 24 abnormal views from a body CT scanner were made. Although the physical quality of the printer images was poor, a group of radiologists and clinicians found that VP images are adequate to confirm the lesion described in the radiology report. The VP images can be used as secondary images and attached to a report as part of the radiology service, to increase communication between radiologists and clinicians and to prevent the loss of primary images from the radiology file.

  12. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Precipitation Video Imager (PVI) GCPEx dataset collected precipitation particle images and drop size distribution data from November 2011...

  13. Evaluation of video capture equipment for secondary image acquisition in the PACS.

    Science.gov (United States)

    Sukenobu, Yoshiharu; Sasagaki, Michihiro; Hirabuki, Norio; Naito, Hiroaki; Narumi, Yoshifumi; Inamura, Kiyonari

    2002-01-01

    There are many cases in which picture archiving and communication systems (PACS) are built around older existing modalities with no DICOM output. One method for interfacing them to the PACS is to implement video capture (frame grabber) equipment. This equipment takes the analog video signal output from medical imaging modalities; the video signal's amplitude is A/D converted and supplied to the PACS. In this report, we measured and evaluated the accuracy with which this video capture equipment could capture images. From the physical evaluation, we found the pixel values of an original image and its captured image were almost equal over the 20%-90% gray-level range. The change in the pixel values of a captured image was +/-3 on average. The change in gray-level concentration was acceptable, with an average standard deviation of around 0.63. As for resolution, degradation was observed only at the highest level measured. In a subjective evaluation, the CT image received a grade of 2.81 on average (with the same quality as the reference image defined as grade 3.0). Abnormalities in heads, chests, and abdomens were judged not to influence diagnostic accuracy. Some small differences were seen when comparing captured and reference images, but they were recognized as having no influence on diagnosis.
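
    The reported figures (deviations within +/-3 gray levels, a small standard deviation) can be reproduced in spirit with a simple fidelity report. The sketch below is an illustrative stand-in, not the authors' measurement protocol; the noisy capture chain is simulated:

```python
import numpy as np

def capture_fidelity(original, captured):
    """Summarise how faithfully a frame grabber reproduced an image:
    mean absolute pixel deviation, deviation spread, and the fraction
    of pixels within +/-3 gray levels (the tolerance discussed above)."""
    diff = captured.astype(int) - original.astype(int)
    return {
        "mean_abs_diff": float(np.abs(diff).mean()),
        "std_diff": float(diff.std()),
        "within_pm3": float(np.mean(np.abs(diff) <= 3)),
    }

# Hypothetical 8-bit test pattern passed through a noisy analog capture chain.
rng = np.random.default_rng(2)
original = rng.integers(0, 256, size=(256, 256))
captured = np.clip(original + rng.integers(-2, 3, size=original.shape), 0, 255)
report = capture_fidelity(original, captured)
print(report["within_pm3"])  # -> 1.0: all deviations lie within +/-3 gray levels
```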

  14. Video-rate optical flow corrected intraoperative functional fluorescence imaging

    NARCIS (Netherlands)

    Koch, Maximilian; Glatz, Juergen; Ermolayev, Vladimir; de Vries, Elisabeth G. E.; van Dam, Gooitzen M.; Englmeier, Karl-Hans; Ntziachristos, Vasilis

    Intraoperative fluorescence molecular imaging based on targeted fluorescence agents is an emerging approach to improve surgical and endoscopic imaging and guidance. Short exposure times per frame and implementation at video rates are necessary to provide continuous feedback to the physician and

  15. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvements in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating a high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring high image sharpness, small electronic file capacity, and realistic lecturer motion.
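
    A minimal sketch of the superimposition step, assuming a simple thresholded background-subtraction mask in place of the paper's pattern recognition techniques (all frames here are synthetic grayscale stand-ins):

```python
import numpy as np

def superimpose_lecturer(camera_frame, background, screen_image, thresh=30):
    """Overlay the lecturer onto a captured PC screen image.

    The lecturer is extracted by thresholded background subtraction
    (a simplification of the paper's extraction step), then composited
    over the screen capture.
    """
    mask = np.abs(camera_frame.astype(int) - background.astype(int)) > thresh
    out = screen_image.copy()
    out[mask] = camera_frame[mask]
    return out, mask

# Hypothetical grayscale frames: flat background, lecturer as a bright block.
background = np.full((120, 160), 50, dtype=np.uint8)
frame = background.copy()
frame[40:100, 60:90] = 200                         # the lecturer
screen = np.full((120, 160), 255, dtype=np.uint8)  # captured slide
composite, mask = superimpose_lecturer(frame, background, screen)
print(int(mask.sum()))  # -> 1800 lecturer pixels composited onto the slide
```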

  16. Design of a system based on DSP and FPGA for video recording and replaying

    Science.gov (United States)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper brings forward a video recording and replaying system with an architecture based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system achieves encoding, recording, decoding, and replaying of Video Graphics Array (VGA) signals displayed on a monitor during the navigation of airplanes and ships. In this architecture, the DSP is the main processor, handling the large amount of complicated calculation required for digital signal processing. The FPGA is a coprocessor that preprocesses video signals and implements logic control in the system. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM), and the First-In-First-Out (FIFO) buffer. This transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface of an Integrated Drive Electronics (IDE) Hard Disk (HD), which offers high-speed data access without relying on a computer. The main functions of the FPGA logic are described, and screenshots of the behavioral simulation are provided. In the DSP program design, Enhanced Direct Memory Access (EDMA) channels transfer data between the FIFO and the SDRAM without CPU intervention, preserving the CPU's computing performance. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways and means of achieving high-performance code are briefly presented. The data-processing capability of the system is satisfactory, and the smoothness of the replayed video is acceptable. By right of its design flexibility and reliable operation, the system based on DSP and FPGA

  17. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  18. First high speed imaging of lightning from summer thunderstorms over India: Preliminary results based on amateur recording using a digital camera

    Science.gov (United States)

    Narayanan, V. L.

    2017-12-01

    For the first time, high speed imaging of lightning from a few isolated tropical thunderstorms has been observed from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during the summer months with a digital camera capable of recording high speed video at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14400 deinterlaced images per video file. An automatic processing algorithm, discussed in detail, was developed for quick identification and analysis of the lightning events. Preliminary results indicating different phenomena associated with lightning, such as stepped leaders, dart leaders, luminous channels corresponding to continuing currents, and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud, and cloud-air discharges are also displayed. This indicates that although high speed cameras with frame rates of a few thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-sensor-based digital cameras can provide important information as well. The lightning imaging activity presented herein was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments for coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings made with affordable digital cameras will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
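
    One plausible screening pass for such an automatic algorithm, sketched here as an assumption (the abstract does not specify the author's actual method), is to flag sudden jumps in mean frame brightness against a robust estimate of normal variation:

```python
import numpy as np

def detect_lightning_frames(frames, k=8.0):
    """Flag frames where mean brightness jumps sharply upward.

    With ~14400 deinterlaced images per 30 s clip, candidate lightning
    events can be found as sudden positive jumps in mean frame
    brightness, judged against a robust (median/MAD) estimate of the
    clip's normal frame-to-frame variation.
    """
    means = frames.reshape(len(frames), -1).mean(axis=1)
    jumps = np.diff(means, prepend=means[0])
    med = np.median(jumps)
    mad = np.median(np.abs(jumps - med)) + 1e-9
    return np.flatnonzero(jumps - med > k * mad)

# Hypothetical clip: 480 dim frames with a 3-frame flash starting at frame 100.
rng = np.random.default_rng(3)
frames = 20.0 + 0.5 * rng.standard_normal((480, 32, 32))
frames[100:103] += 100.0
print(detect_lightning_frames(frames))  # -> [100], the flash onset frame
```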

  19. Simultaneous recording of EEG and electromyographic polygraphy increases the diagnostic yield of video-EEG monitoring.

    Science.gov (United States)

    Hill, Aron T; Briggs, Belinda A; Seneviratne, Udaya

    2014-06-01

    To investigate the usefulness of adjunctive electromyographic (EMG) polygraphy in the diagnosis of clinical events captured during long-term video-EEG monitoring. A total of 40 patients (21 women, 19 men) aged between 19 and 72 years (mean 43) investigated using video-EEG monitoring were studied. Electromyographic activity was simultaneously recorded with EEG in four patients selected on clinical grounds. In these patients, surface EMG electrodes were placed over muscles suspected to be activated during a typical clinical event. Of the 40 patients investigated, 24 (60%) were given a diagnosis, whereas 16 (40%) remained undiagnosed. All four patients receiving adjunctive EMG polygraphy obtained a diagnosis, with three of these diagnoses being exclusively reliant on the EMG recordings. Specifically, one patient was diagnosed with propriospinal myoclonus, another patient was diagnosed with facio-mandibular myoclonus, and a third patient was found to have bruxism and periodic leg movements of sleep. The information obtained from surface EMG recordings aided the diagnosis of clinical events captured during video-EEG monitoring in 7.5% of the total cohort. This study suggests that EEG-EMG polygraphy may be used as a technique of improving the diagnostic yield of video-EEG monitoring in selected cases.

  20. 3D reconstruction of cystoscopy videos for comprehensive bladder records

    OpenAIRE

    Lurie, Kristen L.; Angst, Roland; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.

    2017-01-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize ...

  1. Moving object detection in top-view aerial videos improved by image stacking

    Science.gov (United States)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of more than three false positive and false negative detections per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
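
    The core benefit of stacking, noise suppression through per-pixel redundancy, can be shown in a few lines. This sketch assumes the frames are already registered and warped (the hard part the paper addresses) and simply averages the stack:

```python
import numpy as np

def stack(frames):
    """Average an aligned image stack. Per-pixel redundancy suppresses
    zero-mean noise by roughly sqrt(N) for N frames; registration and
    warping of the frames are assumed to have been done already."""
    return np.mean(frames, axis=0)

rng = np.random.default_rng(4)
clean = rng.random((64, 64)) * 100                        # stationary background
frames = clean + 5.0 * rng.standard_normal((16, 64, 64))  # 16 noisy, aligned frames
single_err = np.std(frames[0] - clean)                    # noise of one frame
stacked_err = np.std(stack(frames) - clean)               # noise after stacking
print(round(single_err / stacked_err, 1))                 # ~4x reduction for 16 frames
```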

  2. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    Science.gov (United States)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

    Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
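
    The divergence idea can be sketched directly: read each frame's hidden activations as un-normalised log probabilities, normalise with a softmax, and declare a context boundary wherever the symmetric KL divergence between consecutive frames spikes. The activations below are synthetic stand-ins for a caption model's hidden layer:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

def segment(hidden_states, thresh):
    """Cut a frame sequence into contexts: each frame's hidden
    activations are read as un-normalised log probabilities, and a
    boundary is declared wherever the symmetric KL divergence between
    consecutive frames exceeds `thresh`."""
    dists = [softmax(h) for h in hidden_states]
    cuts = [0]
    for i in range(1, len(dists)):
        d = 0.5 * (kl(dists[i - 1], dists[i]) + kl(dists[i], dists[i - 1]))
        if d > thresh:
            cuts.append(i)   # low divergence = same context; high = new context
    return cuts

# Hypothetical activations: two contexts, with the change at frame 5.
rng = np.random.default_rng(5)
ctx_a = 2.0 * rng.standard_normal(32)
ctx_b = 2.0 * rng.standard_normal(32)
states = [ctx_a + 0.05 * rng.standard_normal(32) for _ in range(5)]
states += [ctx_b + 0.05 * rng.standard_normal(32) for _ in range(5)]
print(segment(states, thresh=0.5))  # -> [0, 5]
```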

  3. Computer simulation of orthognathic surgery with video imaging

    Science.gov (United States)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

    Patients with extreme jaw imbalance must often undergo operative corrections. The goal of therapy is to harmonize the stomatognathic system and an aesthetical correction of the face profile. A new procedure will be presented which supports the maxillo-facial surgeon in planning the operation and which also presents the patient the result of the treatment by video images. Once an x-ray has been digitized it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters, and a new soft tissue profile can be calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987-91, 76 out of 121 patients were able to be evaluated. The deviation in profile change varied between .0 and 1.6mm. A side effect of the practical applications was an increase in patient compliance.

  4. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  5. Comparison of cardiopulmonary resuscitation techniques using video camera recordings.

    OpenAIRE

    Mann, C J; Heyworth, J

    1996-01-01

    OBJECTIVE--To use video recordings to compare the performance of resuscitation teams in relation to their previous training in cardiac resuscitation. METHODS--Over a 10 month period all cardiopulmonary resuscitations carried out in an accident and emergency (A&E) resuscitation room were videotaped. The following variables were monitored: (1) time to perform three defibrillatory shocks; (2) time to give intravenous adrenaline (centrally or peripherally); (3) the numbers and grade of medical an...

  6. What do we do with all this video? Better understanding public engagement for image and video annotation

    Science.gov (United States)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms, have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  7. Video as a Metaphorical Eye: Images of Positionality, Pedagogy, and Practice

    Science.gov (United States)

    Hamilton, Erica R.

    2012-01-01

    Considered by many to be cost-effective and user-friendly, video technology is utilized in a multitude of contexts, including the university classroom. One purpose, although not often used, involves recording oneself teaching. This autoethnographic study focuses on the author's use of video and reflective practice in order to capture and examine…

  8. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework that integrates the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained while they watch the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, because fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
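
    A deliberately simplified stand-in for the learned mapping, using plain ridge regression instead of the paper's joint subspace learning, still conveys the pipeline: fit a map from low-level audiovisual features to fMRI-derived features, then predict fMRI-like features for unseen clips without scanning (all data below are synthetic):

```python
import numpy as np

def ridge_fit(X, Y, lam=1e-3):
    """Closed-form ridge regression mapping low-level audiovisual
    features X to fMRI-derived features Y. A simplified stand-in for
    the joint subspace learning described above."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 10))                         # audiovisual features per clip
true_map = rng.standard_normal((10, 4))                    # hidden ground-truth relation
Y = X @ true_map + 0.01 * rng.standard_normal((200, 4))    # fMRI-derived features
W = ridge_fit(X, Y)
pred = X @ W                                               # fMRI-like features, no scanner
corr = np.corrcoef(pred.ravel(), Y.ravel())[0, 1]
print(corr > 0.99)  # -> True: the map recovers the feature correspondence
```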

  9. Performance of a video-image-subtraction-based patient positioning system

    International Nuclear Information System (INIS)

    Milliken, Barrett D.; Rubin, Steven J.; Hamilton, Russell J.; Johnson, L. Scott; Chen, George T.Y.

    1997-01-01

    Purpose: We have developed and tested an interactive video system that utilizes image subtraction techniques to enable high precision patient repositioning using surface features. We report quantitative measurements of system performance characteristics. Methods and Materials: Video images can provide a high precision, low cost measure of patient position. Image subtraction techniques enable one to incorporate detailed information contained in the image of a carefully verified reference position into real-time images. We have developed a system using video cameras providing orthogonal images of the treatment setup. The images are acquired, processed and viewed using an inexpensive frame grabber and a PC. The subtraction images provide the interactive guidance needed to quickly and accurately place a patient in the same position for each treatment session. We describe the design and implementation of our system, and its quantitative performance, using images both to measure changes in position, and to achieve accurate setup reproducibility. Results: Under clinical conditions (60 cm field of view, 3.6 m object distance), the position of static, high contrast objects could be measured with a resolution of 0.04 mm (rms) in each of two dimensions. The two-dimensional position could be reproduced using the real-time image display with a resolution of 0.15 mm (rms). Two-dimensional measurement resolution of the head of a patient undergoing treatment for head and neck cancer was 0.1 mm (rms), using a lateral view, measuring the variation in position of the nose and the ear over the course of a single radiation treatment. Three-dimensional repositioning accuracy of the head of a healthy volunteer using orthogonal camera views was less than 0.7 mm (systematic error) with an rms variation of 1.2 mm. Setup adjustments based on the video images were typically performed within a few minutes. The higher precision achieved using the system to measure objects than to reposition
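
    The subtraction principle can be sketched in a few lines: the difference image goes flat when the live view matches the verified reference, and scanning candidate offsets against the subtraction error recovers the displacement. This is a toy single-axis version of the interactive guidance described above, on synthetic images:

```python
import numpy as np

def subtraction_image(reference, live):
    """Signed difference used for interactive repositioning: the image
    is near zero exactly when the patient matches the verified
    reference position."""
    return live.astype(float) - reference.astype(float)

def misalignment(reference, live):
    """Scalar setup-error measure: mean absolute subtraction residual."""
    return float(np.abs(subtraction_image(reference, live)).mean())

# Hypothetical setup: a bright profile feature, then the same scene shifted.
reference = np.zeros((100, 100))
reference[40:60, 45:55] = 1.0
live = np.roll(reference, 4, axis=1)   # patient off by 4 pixels laterally

# Scan candidate corrections; the subtraction residual vanishes at the true shift.
best = min(range(-6, 7), key=lambda s: misalignment(reference, np.roll(live, -s, axis=1)))
print(best)  # -> 4, the lateral displacement to correct
```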

  10. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed essentially unchanged for almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer only in the eyepiece. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming being nowadays state-of-the-art. The reasons the output technique in the thermal world stayed unchanged for so long are the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in pixel counts of thermal imagers (TIs) compared with consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  11. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video that builds on a deep neural network and image-patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, to best reconstruct the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve a 1-D representation of image blocks. Under 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously attain a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods for low-bitrate transmission.
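A tied-weight linear autoencoder spans the same subspace as PCA, so the zero-reconstruction-error property claimed for the DLA can be illustrated with a plain SVD: when the latent dimension matches the intrinsic rank of the patch data, the linear code reconstructs the patches exactly, up to floating-point error. A hedged NumPy sketch on synthetic patches, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 "image patches" of 8x8 = 64 dims, but lying in a 5-dim subspace.
basis = rng.standard_normal((5, 64))
patches = rng.standard_normal((200, 5)) @ basis

# Linear "autoencoder": encoder = top-k right singular vectors,
# decoder = their transpose (tied weights).
k = 5
_, _, Vt = np.linalg.svd(patches, full_matrices=False)
encode = lambda X: X @ Vt[:k].T   # 64-D patch -> k-D code
decode = lambda Z: Z @ Vt[:k]     # k-D code -> reconstructed patch

recon = decode(encode(patches))
err = np.max(np.abs(recon - patches))
print(err)  # effectively zero (machine precision)
```

Clustering patches first, as the paper does, shrinks the effective rank within each cluster, which is what makes very low-dimensional codes viable.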

  12. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  13. The next generation borescope -- Video imaging measurement systems as portable as a fiberscope

    International Nuclear Information System (INIS)

    Boyd, C.E.

    1994-01-01

    Today, Remote Visual Inspection (RVI) techniques routinely save industry the significant costs associated with unscheduled shutdowns and equipment disassembly by enabling visual inspection of otherwise inaccessible equipment surfaces with instruments called borescopes. Specific applications in the nuclear industry include heat exchangers, condensers, boiler tubes, steam generators, headers, and other general interior surface inspections. While borescope inspections have achieved widespread utility, their potential applicability and value have been limited by their inability to provide dimensional information about the objects seen. This paper presents a simple but very accurate measurement technique that enables the inspector to make measurements of objects directly from the borescope image. While used effectively since 1990, the technique was designed for a video imaging borescope and has therefore not been available for shorter-length fiberscope applications--until now. On June 6, 1993 Welch Allyn introduced the VideoProbe XL, a video imaging borescope that is as portable and affordable as a one-meter fiberscope. This breakthrough not only extends video imaging into the rest of the fiberscope world, but also opens the door to this measurement capability for those applications.

  14. Using Grounded Theory to Analyze Qualitative Observational Data that is Obtained by Video Recording

    Directory of Open Access Journals (Sweden)

    Colin Griffiths

    2013-06-01

    Full Text Available This paper presents a method for the collection and analysis of qualitative data that is derived by observation and that may be used to generate a grounded theory. Video recordings were made of the verbal and non-verbal interactions of people with severe and complex disabilities and the staff who work with them. Three dyads composed of a student/teacher or carer and a person with a severe or profound intellectual disability were observed in a variety of different activities that took place in a school. Two of these recordings yielded 25 minutes of video, which was transcribed into narrative format. The nature of the qualitative micro data that was captured is described and the fit between such data and classic grounded theory is discussed. The strengths and weaknesses of the use of video as a tool to collect data that is amenable to analysis using grounded theory are considered. The paper concludes by suggesting that using classic grounded theory to analyze qualitative data that is collected using video offers a method that has the potential to uncover and explain patterns of non-verbal interactions that were not previously evident.

  15. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  16. Context indexing of digital cardiac ultrasound records in PACS

    Science.gov (United States)

    Lobodzinski, S. Suave; Meszaros, Georg N.

    1998-07-01

    Recent wide adoption of the DICOM 3.0 standard by ultrasound equipment vendors created a need for practical clinical implementations of cardiac imaging study visualization, management and archiving. DICOM 3.0 defines only a logical and physical format for exchanging image data (still images, video, patient and study demographics). All DICOM-compliant imaging studies must presently be archived on a 650 MB recordable compact disc. This is a severe limitation for ultrasound applications, where studies 3 to 10 minutes long are common practice. In addition, DICOM digital echocardiography objects require physiological signal indexing, content segmentation and characterization. Since DICOM 3.0 is an interchange standard only, it does not define how to database composite video objects. The goal of this research was therefore to address the issues of efficient storage, retrieval and management of DICOM-compliant cardiac video studies in a distributed PACS environment. Our Web-based implementation has the advantage of accommodating both DICOM-defined entity-relation modules (equipment data, patient data, video format, etc.) in standard relational database tables and digital indexed video with its attributes in an object-relational database. The object-relational data model facilitates content indexing of full-motion cardiac imaging studies through bi-directional hyperlink generation that ties searchable video attributes and related objects to individual video frames in the temporal domain. Benefits realized from the use of bi-directionally hyperlinked data models in an object-relational database include: (1) real-time video indexing during image acquisition, (2) random access and frame-accurate instant playback of previously recorded full-motion imaging data, and (3) time savings from faster and more accurate access to data through multiple navigation mechanisms such as multidimensional queries on an index, queries on a hyperlink attribute, free search and browsing.

  17. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  18. LIDAR-INCORPORATED TRAFFIC SIGN DETECTION FROM VIDEO LOG IMAGES OF MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    Full Text Available A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used by many transportation agencies to survey the street view and roadside transportation infrastructure, such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of overhead and roadside traffic signs can be obtained according to the traffic sign setup specifications of different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting on those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the
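The RANSAC plane-fitting step used to extract candidate sign planes from the Lidar points can be sketched as follows. This is a generic RANSAC implementation on synthetic data, not the paper's code:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Fit a plane (n, d) with n.p + d = 0 to a noisy point cloud by
    RANSAC: sample 3 points, count inliers, keep the best hypothesis."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers

rng = np.random.default_rng(1)
# Synthetic "traffic sign": 300 points on the plane z = 2, plus 60 outliers.
sign = np.column_stack([rng.uniform(0, 1, (300, 2)), np.full(300, 2.0)])
noise = rng.uniform(-5, 5, (60, 3))
(n, d), inliers = ransac_plane(np.vstack([sign, noise]))
print(abs(n[2]))           # ~1.0: recovered normal is close to the z axis
print(inliers[:300].sum()) # most of the 300 plane points are inliers
```

In the paper's pipeline, the winning plane would then be projected into the video log image to form a sign ROI.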

  19. New operator's console recorder

    International Nuclear Information System (INIS)

    Anon.

    2009-01-01

    This article described a software module that automatically records the images being shown on multiple HMI or SCADA operator displays. Videos used for monitoring activities at industrial plants can be combined with the operator console videos and data from a process historian. This enables engineers, analysts or investigators to see what is occurring in the plant, what the operator is seeing on the HMI screen, and all relevant real-time data from an event. In the case of a leak at a pumping station, investigators could watch plant video taken at a remote site showing fuel oil creeping across the floor, along with real-time data being acquired from pumps, valves and the receiving tank while the leak is occurring. The video shows the operator's HMI screen as well as the alarm screen that signifies the leak detection. The Longwatch Operator's Console Recorder and Video Historian are used together to acquire data about actual plant management because they show everything that happens during an event. The Console Recorder automatically retrieves and replays operator displays by clicking on a time-based alarm or system message. Playback of the video feed is a valuable tool for training and analysis purposes, and can help mitigate insurance and regulatory issues by eliminating uncertainty and conjecture. 1 fig.

  20. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search indexing feature. As applications of video cameras have increased greatly in recent years, face recognition is a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos, and it is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solve real cases.
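The occlusion-handling idea can be illustrated with ordinary PCA standing in for the paper's fuzzy PCA: fit a low-dimensional face subspace on complete training data, estimate the subspace coefficients of a new face from its visible pixels only, and reconstruct the occluded region from the model. A synthetic NumPy sketch, not the published algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Training "faces": 100 samples lying in a 3-D subspace of a 50-D pixel space.
W = rng.standard_normal((3, 50))
train = rng.standard_normal((100, 3)) @ W
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
comps = Vt[:3]                       # principal components ("eigenfaces")

# New face with an occlusion: pixels 10..29 are unknown.
face = rng.standard_normal(3) @ W
visible = np.ones(50, dtype=bool)
visible[10:30] = False

# Solve for the component coefficients using only the visible pixels,
# then reconstruct the whole face, occluded region included.
A = comps[:, visible].T
coef, *_ = np.linalg.lstsq(A, (face - mean)[visible], rcond=None)
recon = mean + coef @ comps

err = np.max(np.abs(recon - face))
print(err)  # small: the hidden pixels are recovered from the subspace model
```

Fuzzy PCA additionally down-weights unreliable pixels instead of excluding them outright, which is what makes the real system robust to partial, soft occlusions.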

  1. Disembodied perspective: third-person images in GoPro videos

    OpenAIRE

    Bédard, Philippe

    2015-01-01

    Used as much in extreme-sports videos and professional productions as in amateur and home videos, GoPro wearable cameras have become ubiquitous in contemporary moving image culture. During its swift and ongoing rise in popularity, GoPro has also enabled the creation of new and unusual points of view, among which are “third-person images”. This article introduces and defines this particular phenomenon through an approach that deals with both the aesthetic and technical characteristics of the i...

  2. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT

    Directory of Open Access Journals (Sweden)

    Mohamed M. Ibrahim

    2014-01-01

    Full Text Available Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In the nonblind watermarking systems, the need of the original host file in the watermark recovery operation makes an overhead over the system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, three-level discrete wavelet transform (DWT is used as a watermark embedding/extracting domain, Arnold transform is used as a watermark encryption/decryption method, and different types of media (gray image, color image, and video are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as: geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  3. Video multiple watermarking technique based on image interlacing using DWT.

    Science.gov (United States)

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In the nonblind watermarking systems, the need of the original host file in the watermark recovery operation makes an overhead over the system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, three-level discrete wavelet transform (DWT) is used as a watermark embedding/extracting domain, Arnold transform is used as a watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as: geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
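The Arnold transform used above for watermark encryption is the classic cat map, which scrambles a square N x N image by the invertible pixel shuffle (x, y) -> (x + y, x + 2y) mod N. A self-contained NumPy sketch of the scrambling and its inverse (the DWT embedding step is omitted):

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map: (x, y) -> (x + y, x + 2y) mod N, per pixel.
    Scrambles a square N x N image; invertible and periodic in N."""
    N = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        scr = np.empty_like(out)
        scr[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = scr
    return out

def arnold_inverse(img, iterations=1):
    """Inverse map (from the inverse matrix [[2,-1],[-1,1]]):
    (x, y) -> (2x - y, y - x) mod N."""
    N = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        scr = np.empty_like(out)
        scr[(2 * x - y) % N, (y - x) % N] = out[x, y]
        out = scr
    return out

wm = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 watermark
scrambled = arnold(wm, 3)
restored = arnold_inverse(scrambled, 3)
print(np.array_equal(restored, wm))  # True: scrambling is reversible
```

In the watermarking scheme, the scrambled watermark, rather than the plain one, is what gets embedded into the DWT coefficients, so an attacker who extracts the coefficients still sees only noise.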

  4. Let's Make a Movie: Investigating Pre-Service Teachers' Reflections on Using Video Recorded Role Playing Cases in Turkey

    Science.gov (United States)

    Koc, Mustafa

    2011-01-01

    This study examined the potential consequences of using student-filmed video cases in the study of classroom management in teacher education. Pre-service teachers in groups were engaged in video-recorded role playing to simulate classroom memoirs. Each group shared their video cases and interpretations in a class presentation. Qualitative data…

  5. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential feature of new-generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks used in video encoding are also reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed in FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding run on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of CYCLONE V 5CEFA7 FPGA resources on average.

  6. Improvement of Skills in Cardiopulmonary Resuscitation of Pediatric Residents by Recorded Video Feedbacks.

    Science.gov (United States)

    Anantasit, Nattachai; Vaewpanich, Jarin; Kuptanon, Teeradej; Kamalaporn, Haruitai; Khositseth, Anant

    2016-11-01

    To evaluate pediatric residents' cardiopulmonary resuscitation (CPR) skills and their improvement after recorded video feedback. Pediatric residents from a university hospital were enrolled. The authors surveyed the level of pediatric resuscitation skill confidence with a questionnaire. Eight psychomotor skills were evaluated individually, including airway, bag-mask ventilation, pulse check, prompt starting and technique of chest compression, high-quality CPR, tracheal intubation, intraosseous access, and defibrillation. Mock code skills were also evaluated as a team using a high-fidelity mannequin simulator. All participants attended a concise Pediatric Advanced Life Support (PALS) lecture and received one hour of video-recorded feedback. They were re-evaluated 6 wk later in the same manner. Thirty-eight residents were enrolled. All participants had a moderate to high level of confidence in their CPR skills. Over 50% of participants passed each psychomotor skill except bag-mask ventilation and intraosseous access. There was poor correlation between confidence and passing the psychomotor skills test. After course feedback, the percentage achieving high-quality CPR in the second course test was significantly improved (46% to 92%, p = 0.008). The pediatric resuscitation course should remain in the pediatric resident curriculum and should be re-evaluated frequently. Video-recorded feedback on pitfalls during individual CPR skills and mock code case scenarios could improve short-term psychomotor CPR skills and lead to higher-quality CPR performance.

  7. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.

  8. In Pursuit of Reciprocity: Researchers, Teachers, and School Reformers Engaged in Collaborative Analysis of Video Records

    Science.gov (United States)

    Curry, Marnie W.

    2012-01-01

    In the ideal, reciprocity in qualitative inquiry occurs when there is give-and-take between researchers and the researched; however, the demands of the academy and resource constraints often make the pursuit of reciprocity difficult. Drawing on two video-based, qualitative studies in which researchers utilized video records as resources to enhance…

  9. EEG in the classroom: Synchronised neural recordings during video presentation

    DEFF Research Database (Denmark)

    Poulsen, Andreas Trier; Kamronn, Simon Due; Dmochowski, Jacek

    2017-01-01

    We performed simultaneous recordings of electroencephalography (EEG) from multiple students in a classroom, and measured the inter-subject correlation (ISC) of activity evoked by a common video stimulus. The neural reliability, as quantified by ISC, has been linked to engagement and attentional […]-evoked neural responses, known to be modulated by attention, can be tracked for groups of students with synchronized EEG acquisition. This is a step towards real-time inference of engagement in the classroom.
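In its simplest form, inter-subject correlation is the mean pairwise Pearson correlation of a channel's time course across subjects; the study uses correlated component analysis, but the basic quantity can be sketched on synthetic "EEG" as follows (all signals here are invented for illustration):

```python
import numpy as np

def inter_subject_correlation(signals):
    """Mean pairwise Pearson correlation of one channel's time course
    across subjects (a simplified stand-in for correlated-component ISC)."""
    r = np.corrcoef(signals)                 # subjects x subjects
    iu = np.triu_indices(len(signals), k=1)  # each pair counted once
    return r[iu].mean()

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
stimulus = np.sin(2 * np.pi * 1.5 * t)  # shared video-evoked component

# "Attentive" subjects carry a strong shared component; "inattentive"
# subjects are mostly idiosyncratic noise.
attentive = np.array(
    [stimulus + 0.3 * rng.standard_normal(t.size) for _ in range(5)])
inattentive = np.array(
    [0.2 * stimulus + rng.standard_normal(t.size) for _ in range(5)])

print(inter_subject_correlation(attentive))    # high (close to 1)
print(inter_subject_correlation(inattentive))  # low (close to 0)
```

The study's point is precisely this contrast: higher ISC across students tracks higher engagement with the video.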

  10. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...
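One of the well-known image metrics in question, PSNR, takes only a few lines; a minimal NumPy implementation (the bitrate features and the machine learning step are outside this sketch):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    ref = reference.astype(np.float64)
    dis = distorted.astype(np.float64)
    mse = np.mean((ref - dis) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = frame + rng.normal(0, 5, frame.shape)  # mild compression-like noise
print(round(psnr(frame, noisy), 1))  # ~34 dB; higher means closer to the original
```

Per-frame PSNR values like this, pooled over a streaming session, are typical inputs to the kind of quality predictor the abstract describes.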

  11. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Highlights:
    • Development of an image registration method for JET IR and fast visible cameras.
    • Method based on SIFT descriptors and the coherent point drift (CPD) point-set registration technique.
    • Method able to deal with extremely noisy images and very low luminosity images.
    • Computation time compatible with inter-shot analysis.

    Abstract: Recent years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is movement of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation between image movement and changes in magnetic field strength. To derive unaltered information from the videos and allow correct interpretation, an image registration method based on highly distinctive scale-invariant feature transform (SIFT) descriptors and the coherent point drift (CPD) point-set registration technique has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied to correct vibrations in videos collected by the JET wide-angle infrared camera and to correct spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
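The paper's registration pipeline combines SIFT descriptors with CPD point-set registration; as a much lighter illustration of recovering a frame-to-frame camera shift, phase correlation estimates a pure translation directly from the Fourier phase. A NumPy sketch on a synthetic shifted frame, not the JET method itself:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation between two frames via
    phase correlation (a lightweight alternative to feature matching)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12  # keep phase only
    peak = np.argmax(np.fft.ifft2(F).real)
    dy, dx = np.unravel_index(peak, a.shape)
    # Map peaks past the midpoint to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
frame = rng.standard_normal((64, 64))
shifted = np.roll(frame, (3, -5), axis=(0, 1))  # simulated camera "vibration"
print(phase_correlation_shift(shifted, frame))  # estimated (dy, dx)
```

Phase correlation only handles rigid translation; the SIFT + CPD combination in the paper is what copes with rotation, noise, and low-contrast intensified images.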

  12. Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.

    Science.gov (United States)

    Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R

    2018-06-01

    The purpose of this study was to determine the accuracy of 14 step-counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step-counting methods estimated less than 100% of actual steps; the Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05), and the AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.

  13. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  14. The client’s ideas and fantasies of the supervisor in video recorded psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    2010-01-01

    Aim: Despite the current relatively widespread use of video as a supervisory tool, there are few empirical studies on how recordings influence the relationship between client and supervisor. This paper presents a qualitative, explorative study of clients’ experience of having their psychotherapy...

  15. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckeler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display.

  16. Dynamic Image Stitching for Panoramic Video

    Directory of Open Access Journals (Sweden)

    Jen-Yu Shieh

    2014-10-01

    Full Text Available This paper presents dynamic image stitching for panoramic video. Building on the OpenCV computer vision library and the SIFT algorithm, it introduces a second-order Gaussian difference (MoG) derived from the DoG (Difference of Gaussians) map to reduce the order of dynamic image synthesis and to simplify the Gaussian pyramid algorithm. MSIFT matching, combined with an overlapping-segmentation method, narrows the scope of feature extraction to increase speed. With this method, traditional image synthesis can be improved without lengthy computation or the usual limits on space and angle. The research uses four ordinary webcams and two IP cameras fitted with wide-angle lenses: the wide-angle lenses monitor a large area, and image stitching produces a panoramic view. The overall application and control interface is built with Microsoft Visual Studio C#. On a personal computer with a 2.4-GHz CPU and 2 GB of RAM, with the cameras attached, the execution speed is three images per second, which reduces the calculation time of the traditional algorithm.
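    The overlap-matching idea in the entry above can be illustrated with a toy one-dimensional sketch (plain Python, not the authors' MSIFT pipeline): the overlap width is chosen to minimize the squared difference between edge regions, and the overlapping samples are blended by averaging.

    ```python
    def best_overlap(left, right, max_overlap):
        """Find the overlap width that minimizes the mean squared
        difference between the right edge of `left` and the left edge
        of `right`: a toy stand-in for feature-based overlap matching."""
        best_w, best_err = 1, float("inf")
        for w in range(1, max_overlap + 1):
            a, b = left[-w:], right[:w]
            err = sum((x - y) ** 2 for x, y in zip(a, b)) / w
            if err < best_err:
                best_w, best_err = w, err
        return best_w

    def stitch(left, right, max_overlap):
        w = best_overlap(left, right, max_overlap)
        # Blend the overlapping region by simple averaging.
        blended = [(x + y) / 2 for x, y in zip(left[-w:], right[:w])]
        return left[:-w] + blended + right[w:]

    left = [0, 1, 2, 3, 4, 5]
    right = [4, 5, 6, 7]           # overlaps the last two samples of `left`
    print(stitch(left, right, 3))  # -> [0, 1, 2, 3, 4.0, 5.0, 6, 7]
    ```

    Real stitching matches 2-D feature points rather than raw edge samples, but the search-then-blend structure is the same.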

  17. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status, by comparing the IPS from evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients were compared for colocated versus video review of IPS and errors. Study methodology and bias were judged by the Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, and FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) to 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. Intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated training status in trauma core competency procedures as well as colocated evaluators did. Prognostic study, level II.

  18. Application of video recording technology to improve husbandry and reproduction in the carmine bee-eater (Merops n. nubicus).

    Science.gov (United States)

    Ferrie, Gina M; Sky, Christy; Schutz, Paul J; Quinones, Glorieli; Breeding, Shawnlei; Plasse, Chelle; Leighty, Katherine A; Bettinger, Tammie L

    2016-01-01

    Incorporating technology with research is becoming increasingly important to enhance animal welfare in zoological settings. Video technology is used in the management of avian populations to facilitate efficient information collection on aspects of avian reproduction that are impractical or impossible to obtain through direct observation. Disney's Animal Kingdom® maintains a successful breeding colony of Northern carmine bee-eaters. This African species is a cavity nester, making their nesting behavior difficult to study and manage in an ex situ setting. After initial research focused on developing a suitable nesting environment, our goal was to continue developing methods to improve reproductive success and increase the likelihood of chicks fledging. We installed infrared bullet cameras in five nest boxes and connected them to a digital video recording system, with data recorded continuously through the breeding season. We then scored and summarized nesting behaviors. Using remote video methods of observation provided much insight into the behavior of the birds in the colony's nest boxes. We observed aggression between birds during the egg-laying period, and therefore immediately removed all of the eggs for artificial incubation, which completely eliminated egg breakage. We also used observations of adult feeding behavior to refine the chick hand-rearing diet and practices. Although many video recording configurations have been summarized and evaluated in various reviews, we found success with the digital video recorder and infrared cameras described here. Applying emerging technologies to cavity nesting avian species is a necessary addition to improving management in and sustainability of zoo avian populations. © 2015 Wiley Periodicals, Inc.

  19. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used to automatically archive the surveillance pictures. The design of the surveillance system is described with examples of its operation.

  20. Observing the Testing Effect using Coursera Video-recorded Lectures: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Paul Zhihao Yong

    2016-01-01

    Full Text Available We investigated the testing effect in Coursera video-based learning. One hundred and twenty-three participants either (a studied an instructional video-recorded lecture four times, (b studied the lecture three times and took one recall test, or (c studied the lecture once and took three tests. They then took a final recall test, either immediately or a week later, through which their learning was assessed. Whereas repeated studying produced better recall performance than did repeated testing when the final test was administered immediately, testing produced better performance when the final test was delayed until a week after. The testing effect was observed using Coursera lectures. Future directions are documented.

  1. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Rojas Raul

    2007-01-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  2. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Raul Rojas

    2008-03-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  3. The impact of online video lecture recordings and automated feedback on student performance

    NARCIS (Netherlands)

    Wieling, M. B.; Hofman, W. H. A.

    To what extent a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures and the offering of online quizzes with appropriate feedback has an additional positive impact on the performance of these students compared to the traditional

  4. Turbulent structure of concentration plumes through application of video imaging

    Energy Technology Data Exchange (ETDEWEB)

    Dabberdt, W.F.; Martin, C. [National Center for Atmospheric Research, Boulder, CO (United States); Hoydysh, W.G.; Holynskyj, O. [Environmental Science & Services Corp., Long Island City, NY (United States)

    1994-12-31

    Turbulent flows and dispersion in the presence of building wakes and terrain-induced local circulations are particularly difficult to simulate with numerical models or measure with conventional fluid modeling and ambient measurement techniques. The problem stems from the complexity of the kinematics and the difficulty in making representative concentration measurements. New laboratory video imaging techniques are able to overcome many of these limitations and are being applied to study a range of difficult problems. Here the authors apply "tomographic" video imaging techniques to the study of the turbulent structure of an ideal elevated plume and the relationship of short-period peak concentrations to long-period average values. A companion paper extends application of the technique to characterization of turbulent plume-concentration fields in the wake of a complex building configuration.

  5. Video Recording and the Research Process

    Science.gov (United States)

    Leung, Constant; Hawkins, Margaret R.

    2011-01-01

    This is a two-part discussion. Part 1 is entitled "English Language Learning in Subject Lessons", and Part 2 is titled "Video as a Research Tool/Counterpoint". Working with different research concerns, the authors attempt to draw attention to a set of methodological and theoretical issues that have emerged in the research process using video data.…

  6. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches.
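    The appearance-based representations reviewed above are built by averaging aligned binary silhouettes over a gait cycle; a minimal sketch of that averaging step (assuming a simple mean, as in the classic Gait Energy Image):

    ```python
    def energy_image(silhouettes):
        """Average a sequence of aligned binary silhouettes (2-D lists
        of 0/1) into one energy image whose values in [0, 1] encode how
        often each pixel belongs to the moving body."""
        n = len(silhouettes)
        rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
        return [[sum(s[r][c] for s in silhouettes) / n for c in range(cols)]
                for r in range(rows)]

    frames = [
        [[1, 0], [1, 1]],
        [[1, 1], [0, 1]],
    ]
    print(energy_image(frames))  # -> [[1.0, 0.5], [0.5, 1.0]]
    ```

    The variants surveyed in the paper differ mainly in how frames are weighted and aligned before this averaging.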

  7. Analyzing communication skills of Pediatric Postgraduate Residents in Clinical Encounter by using video recordings.

    Science.gov (United States)

    Bari, Attia; Khan, Rehan Ahmed; Jabeen, Uzma; Rathore, Ahsan Waheed

    2017-01-01

    To analyze communication skills of pediatric postgraduate residents in clinical encounters by using video recordings. This qualitative exploratory research was conducted through video recording at The Children's Hospital Lahore, Pakistan. Residents who had attended the mandatory communication skills workshop offered by CPSP were included. The video recording of the clinical encounter was done by a trained audiovisual person while the resident was interacting with the patient. Data were analyzed by thematic analysis. Initially, on open coding, 36 codes emerged, and through axial and selective coding these were condensed to 17 subthemes. From these, four main themes emerged: (1) Courteous and polite attitude, (2) Marginal nonverbal communication skills, (3) Power game/Ignoring child participation and (4) Patient as medical object/Instrumental behaviour. All residents treated the patient as a medical object to reach a correct diagnosis and ignored them as a human being. There was a dominant role of doctors, and the residents displayed only marginal nonverbal communication skills, with a lack of social touch and of appropriate eye contact while documenting notes. A brief non-medical interaction for rapport building at the beginning of the interaction was missing, and there was a lack of child involvement. Paediatric postgraduate residents were polite while communicating with parents and child but lacked good nonverbal communication skills. The communication pattern in our study was mostly one-way, showing doctors' instrumental behaviour and ignoring child participation.

  8. American video peak store gives fuel a better image

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    A new American image enhancement system using a video peak frame store aims to overcome the common problems of viewing serial numbers on irradiated fuel assemblies within the reactor core while reducing operator exposure. Other nuclear plant inspection applications are envisaged. (author)

  9. Can social tagged images aid concept-based video search?

    NARCIS (Netherlands)

    Setz, A.T.; Snoek, C.G.M.

    2009-01-01

    This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We

  10. Video-recorded simulated patient interactions: can they help develop clinical and communication skills in today's learning environment?

    Science.gov (United States)

    Seif, Gretchen A; Brown, Debora

    2013-01-01

    It is difficult to provide real-world learning experiences for students to master clinical and communication skills. The purpose of this paper is to describe a novel instructional method using self- and peer-assessment, reflection, and technology to help students develop effective interpersonal and clinical skills. The teaching method is described by the constructivist learning theory and incorporates the use of educational technology. The learning activities were incorporated into the pre-clinical didactic curriculum. The students participated in two video-recording assignments and performed self-assessments on each and had a peer-assessment on the second video-recording. The learning activity was evaluated through the self- and peer-assessments and an instructor-designed survey. This evaluation identified several themes related to the assignment, student performance, clinical behaviors and establishing rapport. Overall the students perceived that the learning activities assisted in the development of clinical and communication skills prior to direct patient care. The use of video recordings of a simulated history and examination is a unique learning activity for preclinical PT students in the development of clinical and communication skills.

  11. Content-based video indexing and searching with wavelet transformation

    Science.gov (United States)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcing agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person, even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database recovers a significant proportion of that individual's biometric data.
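    The entry above proposes a compact video signature with nearest-record retrieval; a toy sketch of the idea (a single level of Haar averaging as the signature, squared L2 distance for retrieval; the data and subject names are hypothetical):

    ```python
    def haar_signature(samples):
        """One level of a Haar transform: pairwise averages form a
        coarse, compact signature of the input (toy illustration of a
        wavelet-based video signature)."""
        return [(samples[i] + samples[i + 1]) / 2
                for i in range(0, len(samples) - 1, 2)]

    def nearest(query_sig, database):
        """Return the key of the stored signature closest to the query."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(database, key=lambda k: dist(database[k], query_sig))

    db = {
        "subject_a": haar_signature([10, 12, 10, 12]),
        "subject_b": haar_signature([50, 52, 50, 52]),
    }
    query = haar_signature([11, 11, 11, 13])
    print(nearest(query, db))  # -> subject_a
    ```

    A real scheme would compute multi-level 2-D wavelet signatures per frame and index them, but retrieval still reduces to a nearest-neighbor search over compact signatures.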

  12. Fast optical recording media based on semiconductor nanostructures for image recording and processing

    International Nuclear Information System (INIS)

    Kasherininov, P. G.; Tomasov, A. A.

    2008-01-01

    Fast optical recording media based on semiconductor nanostructures (CdTe, GaAs) for image recording and processing are developed, with a speed of up to 10⁶ cycles/s (which exceeds the speed of known recording media based on metal-insulator-semiconductor-(liquid crystal) (MIS-LC) structures by two to three orders of magnitude), a photosensitivity of 10⁻² V/cm², and a spatial resolution of 5-10 line pairs/mm. Operating principles of the nanostructures as fast optical recording media and methods for reading images recorded in such media are described. Fast optical processors for recording images in incoherent light based on CdTe nanostructures are implemented. The possibility of their application to fabricate image correlators is shown.

  13. Moving object detection in video satellite image based on deep learning

    Science.gov (United States)

    Zhang, Xueyang; Xiang, Junhua

    2017-11-01

    Moving object detection in video satellite imagery is studied, and a detection algorithm based on deep learning is proposed. The small-scale characteristics of remote sensing video objects are analyzed. First, a background subtraction algorithm using an adaptive Gaussian mixture model generates region proposals. The objects in the region proposals are then classified by a deep convolutional neural network, and moving objects of interest are detected by combining the results with prior information about the sub-satellite point. The classifier is a 21-layer residual convolutional neural network whose parameters are trained by transfer learning. Experimental results on video from the Tiantuo-2 satellite demonstrate the effectiveness of the algorithm.
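    The region-proposal stage described above can be sketched in simplified form (a running-average background model standing in for the adaptive Gaussian mixture model, and 1-D "frames" for brevity):

    ```python
    def update_background(bg, frame, alpha=0.1):
        """Running-average background model: a simplified stand-in for
        an adaptive Gaussian mixture model."""
        return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

    def region_proposals(bg, frame, thresh=10):
        """Pixels differing from the background by more than `thresh`
        are foreground; contiguous runs become 1-D region proposals."""
        fg = [abs(f - b) > thresh for f, b in zip(frame, bg)]
        regions, start = [], None
        for i, on in enumerate(fg + [False]):  # sentinel closes a trailing run
            if on and start is None:
                start = i
            elif not on and start is not None:
                regions.append((start, i - 1))
                start = None
        return regions

    bg = [0.0] * 8
    frame = [0, 0, 50, 60, 0, 0, 70, 0]    # two "moving objects"
    print(region_proposals(bg, frame))     # -> [(2, 3), (6, 6)]
    ```

    In the full pipeline, each proposed region would then be cropped and passed to the CNN classifier.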

  14. Assessing the Content of YouTube Videos in Educating Patients Regarding Common Imaging Examinations.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Won, Eugene; Doshi, Ankur M

    2016-12-01

    To assess the content of currently available YouTube videos seeking to educate patients regarding commonly performed imaging examinations. After initial testing of possible search terms, the first two pages of YouTube search results for "CT scan," "MRI," "ultrasound patient," "PET scan," and "mammogram" were reviewed to identify educational patient videos created by health organizations. Sixty-three included videos were viewed and assessed for a range of features. Average views per video were highest for MRI (293,362) and mammography (151,664). Twenty-seven percent of videos used a nontraditional format (eg, animation, song, humor). All videos (100.0%) depicted a patient undergoing the examination, 84.1% a technologist, and 20.6% a radiologist; 69.8% mentioned examination lengths, 65.1% potential pain/discomfort, 41.3% potential radiation, 36.5% a radiology report/results, 27.0% the radiologist's role in interpretation, and 13.3% laboratory work. For CT, 68.8% mentioned intravenous contrast and 37.5% mentioned contrast safety. For MRI, 93.8% mentioned claustrophobia, 87.5% noise, 75.0% need to sit still, 68.8% metal safety, 50.0% intravenous contrast, and 0.0% contrast safety. For ultrasound, 85.7% mentioned use of gel. For PET, 92.3% mentioned radiotracer injection, 61.5% fasting, and 46.2% diabetic precautions. For mammography, unrobing, avoiding deodorant, and possible additional images were all mentioned by 63.6%; dense breasts were mentioned by 0.0%. Educational patient videos on YouTube regarding common imaging examinations received high public interest and may provide a valuable patient resource. Videos most consistently provided information detailing the examination experience and less consistently provided safety information or described the presence and role of the radiologist. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  15. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  16. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC)-based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
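    The bit-plane embedding idea can be sketched independently of JPEG2000: write message bits into the least-significant bit plane of (non-negative integer) coefficients and read them back. This omits the Tier-1/Tier-2 bookkeeping and OPAP described above.

    ```python
    def embed(coeffs, bits):
        """Write message bits into the least-significant bit plane of
        non-negative integer coefficients (one bit per coefficient)."""
        return [(c & ~1) | b for c, b in zip(coeffs, bits)] + coeffs[len(bits):]

    def extract(coeffs, n_bits):
        """Read the LSB plane back out."""
        return [c & 1 for c in coeffs[:n_bits]]

    coeffs = [14, 7, 22, 9, 31]
    msg = [1, 0, 1]
    stego = embed(coeffs, msg)
    print(stego)              # -> [15, 6, 23, 9, 31]
    print(extract(stego, 3))  # -> [1, 0, 1]
    ```

    Each coefficient changes by at most 1, which is why LSB-plane embedding preserves visual quality.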

  17. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Full Text Available Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is a comprehensive and well-established open-source software package capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command line interface, as well as a set of developer libraries that can be incorporated into applications.
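    A minimal sketch of the kind of FFmpeg batch automation described above, driven from Python via the command-line interface (the `-i`, `-c:v libx264` and `-c:a aac` options are standard FFmpeg flags; the file names are hypothetical):

    ```python
    import pathlib
    import subprocess

    def transcode_command(src, dst_dir):
        """Build an FFmpeg command that transcodes `src` to H.264/AAC
        MP4: `-i` names the input; `-c:v`/`-c:a` pick the encoders."""
        dst = pathlib.Path(dst_dir) / (pathlib.Path(src).stem + ".mp4")
        return ["ffmpeg", "-i", str(src),
                "-c:v", "libx264", "-c:a", "aac", str(dst)]

    def batch_transcode(sources, dst_dir, run=subprocess.run):
        """Transcode every source file; injecting `run` keeps this
        testable without actually invoking FFmpeg."""
        for src in sources:
            run(transcode_command(src, dst_dir), check=True)

    print(transcode_command("tape_017.mov", "out"))
    # prints the full ffmpeg argument list for one file
    ```

    In an archive workflow the same pattern scales to thousands of files by feeding `batch_transcode` a directory listing.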

  18. The advantages of using photographs and video images in ...

    African Journals Online (AJOL)

    Background: The purpose of this study was to evaluate the advantages of a telephone consultation with a specialist in paediatric surgery after taking photographs and video images by a general practitioner for the diagnosis of some diseases. Materials and Methods: This was a prospective study of the reliability of paediatric ...

  19. Effect of a Neonatal Resuscitation Course on Healthcare Providers' Performances Assessed by Video Recording in a Low-Resource Setting.

    Science.gov (United States)

    Trevisanuto, Daniele; Bertuola, Federica; Lanzoni, Paolo; Cavallin, Francesco; Matediana, Eduardo; Manzungu, Olivier Wingi; Gomez, Ermelinda; Da Dalt, Liviana; Putoto, Giovanni

    2015-01-01

    We assessed the effect of an adapted neonatal resuscitation program (NRP) course on healthcare providers' performance in a low-resource setting through the use of video recording. A video recorder, mounted to the radiant warmers in the delivery rooms at Beira Central Hospital, Mozambique, was used to record all resuscitations. One hundred resuscitations (50 before and 50 after participation in an adapted NRP course) were collected and assessed based on a previously published score. All 100 neonates received initial steps; of these, 77 and 32 needed bag-mask ventilation (BMV) and chest compressions (CC), respectively. There was a significant improvement in resuscitation scores at all levels of resuscitation from before to after the course: for "initial steps", the score increased from 33% (IQR 28-39) to 44% (IQR 39-56). Healthcare providers' resuscitation performance improved after participation in an adapted NRP course. Video recording was well accepted by the staff, useful for objective assessment of performance during resuscitation, and can be used as an educational tool in a low-resource setting.

  20. A video event trigger for high frame rate, high resolution video technology

    Science.gov (United States)

    Williams, Glenn L.

    1991-12-01

    When video replaces film, the digitized video data accumulate very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage, coupled with newly available fuzzy logic devices, permits monitoring a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
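    The pretrigger/post-trigger logic described above can be sketched with a ring buffer and simple frame differencing (a stand-in for the fuzzy-logic change detection in the entry; frames here are flat lists of pixel values):

    ```python
    from collections import deque

    def event_trigger(frames, thresh=5, pretrigger=2):
        """Scan a frame stream and return the pretrigger buffer plus
        all frames from the first 'event' onward. An event is a frame
        whose mean absolute difference from the previous frame exceeds
        `thresh`; everything before the event (minus the small ring
        buffer) is discarded, mimicking the archival strategy."""
        buffer = deque(maxlen=pretrigger)   # circular pretrigger store
        kept, triggered, prev = [], False, None
        for frame in frames:
            if triggered:
                kept.append(frame)
            else:
                if prev is not None:
                    diff = sum(abs(a - b) for a, b in zip(frame, prev)) / len(frame)
                    if diff > thresh:
                        triggered = True
                        kept = list(buffer) + [frame]
                if not triggered:
                    buffer.append(frame)
            prev = frame
        return kept

    static = [[0, 0, 0]] * 4
    event = [[40, 40, 40], [41, 41, 41]]
    print(event_trigger(static + event))
    # -> [[0, 0, 0], [0, 0, 0], [40, 40, 40], [41, 41, 41]]
    ```

    The hardware version does this per pixel region in parallel; the control flow (buffer until trigger, then stream) is the same.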

  1. Preliminary study on effects of 60Co γ-irradiation on video quality and the image de-noising methods

    International Nuclear Information System (INIS)

    Yuan Mei; Zhao Jianbin; Cui Lei

    2011-01-01

    Variable noise appears in video images once the playback device has been irradiated by γ-rays, degrading image clarity. To eliminate this image noise, the mechanism by which γ-irradiation affects video playback devices was studied in this paper, and methods to improve image quality in both hardware and software were proposed, using a protection program and a de-noising algorithm. The experimental results show that the video de-noising scheme based on hardware and software can effectively improve the PSNR by 87.5 dB. (authors)
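    The abstract does not specify the de-noising algorithm; as a generic illustration, a median filter (effective against impulsive, radiation-induced noise) together with the standard PSNR measure for 8-bit imagery, shown in 1-D for brevity:

    ```python
    import math

    def median_filter(signal, radius=1):
        """Replace each sample with the median of its neighborhood,
        suppressing impulsive (radiation-like) noise."""
        out = []
        for i in range(len(signal)):
            lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
            window = sorted(signal[lo:hi])
            out.append(window[len(window) // 2])
        return out

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB for 8-bit imagery."""
        mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / len(reference)
        return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

    clean = [10, 10, 10, 10, 10]
    noisy = [10, 10, 255, 10, 10]    # a single impulsive "hit"
    denoised = median_filter(noisy)
    print(denoised)                  # -> [10, 10, 10, 10, 10]
    ```

    A PSNR gain is then simply `psnr(clean, denoised) - psnr(clean, noisy)`; the 87.5 dB figure in the entry refers to the authors' own scheme, not this sketch.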

  2. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    Science.gov (United States)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of the various existing methods; much more work remains to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  3. First results on video meteors from Crete, Greece

    Science.gov (United States)

    Maravelias, G.

    2012-01-01

    This work presents the first systematic video meteor observations from a (forthcoming permanent) station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 lens (focal length 12 mm, f/0.8) running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011), recording 1905 meteors. It significantly outperforms a previous system used by the author during the Perseids 2010 (DMK 21AF04.AS camera by The Imaging Source, CCTV lens of focal length 2.8 mm, UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010), recording 32 meteors. Differences between the two software packages (MetRec, UFO Capture), according to the author's experience, are discussed, along with a short guide to video meteor hardware.

  4. MAVIS: Mobile Acquisition and VISualization - a professional tool for video recording on a mobile platform

    OpenAIRE

    Watten, Phil; Gilardi, Marco; Holroyd, Patrick; Newbury, Paul

    2015-01-01

    Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera, and they are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical ...

  5. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument captures video images available in CCIR format. Two memory planes, each with a capacity of 512 x 512 x 8 bits, enable storage of two video image frames. The stored image can be processed on-line, and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT-compatible computer. (author). 9 refs., 4 figs., 19 photographs
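
The on-line subtraction of the two frame stores can be sketched as an absolute difference of two 8-bit planes; `subtract_frames` below is an illustrative stand-in for the VFP's hardware operation, not its actual firmware.

```python
import numpy as np

def subtract_frames(plane_a, plane_b):
    """Absolute difference of two 8-bit frame stores (e.g. 512 x 512 x 8 bit
    planes), widened to int16 during the subtraction to avoid unsigned
    wrap-around, then returned as uint8 for display."""
    diff = np.abs(plane_a.astype(np.int16) - plane_b.astype(np.int16))
    return diff.astype(np.uint8)
```

Pixels that differ between the two stored frames light up, while identical regions map to 0, which is what makes on-line subtraction useful for image comparison.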

  6. Neonatal apneic seizure of occipital lobe origin: continuous video-EEG recording.

    Science.gov (United States)

    Castro Conde, José Ramón; González-Hernández, Tomás; González Barrios, Desiré; González Campo, Candelaria

    2012-06-01

    We present 2 term newborn infants with apneic seizures originating in the occipital lobe, diagnosed by video-EEG. One infant had ischemic infarction in the distribution of the posterior cerebral artery, extending to the cingulate gyrus. In the other infant, only transient occipital hyperechogenicity was observed on neurosonography. In both cases, while the ictal EEG discharge remained at the occipital level, the infants presented no clinical manifestations. In patient 1, the discharge extended first to the temporal lobe, with subtle motor manifestations and tachycardia, then synchronously to both hemispheres (with bradypnea/hypopnea), and the background EEG activity became suppressed, at which point the infant experienced apnea. In patient 2, background EEG activity became suppressed immediately at the end of the focal discharge, coinciding with the appearance of apnea. In neither case did the clinical description by observers coincide with the video-EEG findings. The existence of connections between the posterior limbic cortex and the temporal lobe and midbrain respiratory centers may explain the clinical symptoms recorded in these 2 cases. The novel features reported here include video-EEG capture of apneic seizure, ischemic lesion in the territory of the posterior cerebral artery as the cause of apneic seizure, and the appearance of apnea when the epileptiform ictal discharge extended to other cerebral areas or when EEG activity became suppressed. None of these clinical findings has been previously reported. We believe this pathology may in fact be fairly common, but that video-EEG monitoring is essential for diagnosis.

  7. Point-of-View Recording Devices for Intraoperative Neurosurgical Video Capture

    Directory of Open Access Journals (Sweden)

    Jose Luis Porras

    2016-10-01

    Introduction: The ability to record and stream neurosurgery is an unprecedented opportunity to further research, medical education, and quality improvement. Here, we appraise the ease of implementation of existing POV devices when capturing and sharing procedures from the neurosurgical operating room, and detail their potential utility in this context. Methods: Our neurosurgical team tested and critically evaluated features of the Google Glass and Panasonic HX-A500 cameras, including ergonomics, media quality, and media sharing, in both the operating theater and the angiography suite. Results: Existing devices boast several features that facilitate live recording and streaming of neurosurgical procedures. Given that their primary application is not intended for the surgical environment, we identified a number of concrete, yet improvable, limitations. Conclusion: The present study suggests that neurosurgical video capture and live streaming represent an opportunity to contribute to research, education, and quality improvement. Despite this promise, shortcomings render existing devices impractical for serious consideration. We describe the features that future recording platforms should possess to improve upon existing technology.

  8. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience; however, skilled sonographers are scarce in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones, and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  9. Computerized video interaction self-instruction of MR imaging fundamentals utilizing laser disk technology

    International Nuclear Information System (INIS)

    Genberg, R.W.; Javitt, M.C.; Popky, G.L.; Parker, J.A.; Pinkney, M.N.

    1986-01-01

    Interactive computer-assisted self-instruction is emerging as a recognized didactic modality and is now being introduced to teach physicians the physics of MR imaging. The interactive system consists of a PC-compatible computer, a 12-inch laser disk drive, and a high-resolution monitor. The laser disk, capable of storing 54,000 images, is pressed from a previously edited videotape of MR and video images. The interactive approach is achieved through the use of the computer and appropriate software. The software is written to include computer graphics overlays of the laser disk images, to select interactive branching paths (depending on the user's response to directives or questions), and to provide feedback so that users can assess their performance. One of these systems is available for use in the scientific exhibit area

  10. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements with H.264/AVC video have been gained...

  11. Nesting behavior of Palila, as assessed from video recordings

    Science.gov (United States)

    Laut, M.E.; Banko, P.C.; Gray, E.M.

    2003-01-01

    We quantified nesting behavior of Palila (Loxioides bailleui), an endangered Hawaiian honeycreeper, by recording at nests during three breeding seasons using a black-and-white video camera connected to a videocassette recorder. A total of seven nests was observed. We measured the following factors for daylight hours: percentage of time the female was on the nest (attendance), length of attendance bouts by the female, length of nest recesses, and adult provisioning rates. Comparisons were made between three stages of the 40-day nesting cycle: incubation (day 1-day 16), early nestling stage (day 17-day 30 [i.e., nestlings ≤ 14 days old]), and late nestling stage (day 31-day 40 [i.e., nestlings > 14 days old]). Of the seven nests observed, four fledged at least one nestling and three failed. One of the failed nests was filmed being depredated by a feral cat (Felis catus). Female nest attendance was near 82% during the incubation stage and decreased to 21% as nestlings aged. We did not detect a difference in attendance bout length between stages of the nesting cycle. Mean length of nest recesses increased from 4.5 min during the incubation stage to over 45 min during the late nestling stage. The mean number of nest recesses per hour ranged from 1.6 to 2.0. Food was delivered to nestlings by adults an average of 1.8 times per hour during the early nestling stage and 1.5 times per hour during the late nestling stage, and this rate did not change over time. Characterization of parental behavior by video showed similarities to, but also key differences from, findings taken from blind observations. Results from this study will facilitate greater understanding of Palila reproductive strategies.

  12. Sequential error concealment for video/images by weighted template matching

    DEFF Research Database (Denmark)

    Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt

    2012-01-01

    In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in packet-loss environments are considered. Missing macroblocks are sequentially reconstructed by filling them with a weighted set of templates...
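
A toy version of template-based concealment: each missing block is filled with a weighted average of the candidate blocks whose one-pixel borders best match the known border around the hole. The inverse-SSD weighting below is a greedy stand-in for the paper's convex-optimization formulation, and all parameters are illustrative.

```python
import numpy as np

def conceal_block(frame, y, x, b=4, n_best=3):
    """Fill a missing b x b block at (y, x) with a weighted average of
    the n_best candidate blocks whose 1-pixel borders best match the
    border around the hole (weights ~ inverse border SSD)."""
    h, w = frame.shape

    def border(i, j):
        # 1-pixel frame around a b x b block at (i, j)
        return np.concatenate([frame[i - 1, j - 1:j + b + 1],
                               frame[i + b, j - 1:j + b + 1],
                               frame[i:i + b, j - 1],
                               frame[i:i + b, j + b]])

    target = border(y, x)             # known pixels around the hole
    cands = []
    for i in range(1, h - b - 1):
        for j in range(1, w - b - 1):
            if abs(i - y) < b and abs(j - x) < b:
                continue              # candidate would overlap the hole
            ssd = float(np.sum((border(i, j) - target) ** 2))
            cands.append((ssd, frame[i:i + b, j:j + b].copy()))
    cands.sort(key=lambda c: c[0])
    best = cands[:n_best]
    wts = np.array([1.0 / (1e-6 + s) for s, _ in best])
    wts /= wts.sum()                  # normalized template weights
    frame[y:y + b, x:x + b] = sum(wt * blk for wt, (_, blk) in zip(wts, best))
```

On strongly structured content (e.g. a periodic texture) the border match finds templates whose interiors reproduce the lost block almost exactly.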

  13. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    Science.gov (United States)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  14. Real-time CT-video registration for continuous endoscopic guidance

    Science.gov (United States)

    Merritt, Scott A.; Rai, Lav; Higgins, William E.

    2006-03-01

    Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-Video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to a current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real-time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per sec. with minimal user-intervention.
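
The continuous-guidance idea (each frame's optimization warm-started from the previous frame's result) can be sketched with a toy translation-only registration. The brute-force local shift search and SSD cost below are illustrative stand-ins for the paper's gradient-based viewpoint optimization over CT-based endoluminal renderings.

```python
import numpy as np

def ssd(a, b):
    """Sum-of-squared-differences image similarity cost."""
    return float(np.sum((a - b) ** 2))

def register_local(ref, frame, init, radius=2):
    """Find the integer (dy, dx) shift of `frame` that best matches `ref`,
    searching only a small neighborhood of the initial guess `init` --
    the warm start that keeps per-frame registration cheap."""
    best, best_cost = init, float("inf")
    for dy in range(init[0] - radius, init[0] + radius + 1):
        for dx in range(init[1] - radius, init[1] + radius + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            c = ssd(ref, shifted)
            if c < best_cost:
                best, best_cost = (dy, dx), c
    return best

def track(ref, frames):
    """Continuous registration: each frame's search starts from the
    previous frame's result, so no manual re-initialization is needed."""
    shift, out = (0, 0), []
    for f in frames:
        shift = register_local(ref, f, shift)
        out.append(shift)
    return out
```

Because inter-frame motion is small, the previous optimum is always inside the new search window, which is the same property that lets the paper's method run at full video frame rates.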

  15. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J.M.

    2017-01-01

    Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  16. Visualizing Music: The Archaeology of Music-Video.

    Science.gov (United States)

    Berg, Charles M.

    Music videos, with their characteristic visual energy and frenetic music-and-dance numbers, have caught on rapidly since their introduction in 1981, bringing prosperity to a slumping record industry. Creating images to accompany existing music is, however, hardly a new idea. The concept can be traced back to 1877 and Thomas Edison's invention of…

  17. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment and everyday life situations, based mainly on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, on real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities of the mobile video domain. Live and delayed sharing seem to have their own characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns about privacy and trust between the participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects the cameramen (who record), the targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  18. Evaluation of regional pulmonary ventilation by videodensitometry using a new X-ray image processor

    International Nuclear Information System (INIS)

    Fujii, Tadashige; Kanai, Hisakata; Handa, Kenjiro; Takizawa, Masaomi

    1988-01-01

    A new video image processing device has been produced in order to assess regional pulmonary ventilation. This device consists of a microcomputer, digital frame memory, digitizer, video monitor, joystick, and videotape recorder. The changing radiographic density of the lungs during deep respiration and forced expiration is recorded by the videotape recorder, which is connected to an image-intensifier television system. The device allows the examining physician to place 6 rectangular windows of variable size over any portion of the video image using the joystick, and to measure the brightness level within these windows simultaneously. Characteristically, the videodensitometric curve and the window markers are superimposed on the frozen final frame of the sampled images. By this procedure, fair videodensigrams were obtained in various respiratory diseases, and reduced ventilatory amplitude was shown in the hypoventilatory regions. The joint use of videodensitometry and perfusion lung scintigraphy provided helpful information concerning the regional ventilation/perfusion relationship. Videodensitometry of the lung with the new X-ray image processor offers routine screening evaluation of regional pulmonary ventilation abnormalities over the entire video image of the lungs without extra effort required of the patients. (author)
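
The core measurement above, mean brightness inside operator-placed rectangular windows sampled on every frame, is easy to sketch. The function below is an illustrative reimplementation, not the original device's firmware; window tuples and array shapes are assumptions.

```python
import numpy as np

def densitometry(frames, windows):
    """Mean brightness inside each rectangular window for every frame.
    `windows` is a list of (top, left, height, width) tuples, the
    joystick-placed rectangles (up to 6 in the device described above).
    Returns an array of shape (n_windows, n_frames): one curve per window."""
    curves = np.empty((len(windows), len(frames)))
    for fi, frame in enumerate(frames):
        for wi, (t, l, h, w) in enumerate(windows):
            curves[wi, fi] = frame[t:t + h, l:l + w].mean()
    return curves
```

Plotting each row of the result against frame time gives the videodensitometric curve for that window; ventilatory amplitude is the peak-to-trough excursion of the curve over a breathing cycle.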

  19. Video Retrieval Based on Text and Image (Video Retrieval Berdasarkan Teks dan Gambar)

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2013-01-01

    Video retrieval searches for a video based on a query entered by the user, either text or an image. Such a system can improve search capability in video browsing and is expected to reduce video retrieval time. The purpose of this research was to design and build a software application for video retrieval based on text and images appearing in the video. The indexing process for text consists of tokenizing and filtering (stopword removal, stemming); the stemming results are saved in a text index table. The indexing process for images builds a color histogram and computes the mean and standard deviation of each primary color (red, green, and blue) of each image; the results of feature extraction are stored in an image table. Video retrieval can use a text query, an image query, or both. For a text query, the system consults the text index table; if the query matches an entry, the system displays the video information associated with it. For an image query, the system extracts the six features (mean and standard deviation of red, green, and blue) and looks them up in the image index table to display the matching video information. When both a text query and an image query are given, the system displays the video information only if the two queries are related, i.e., the text query and the image query share the same film title. Keywords: video, index, retrieval, text, image
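
The image index described above (per-channel mean and standard deviation of RGB) can be sketched directly; `query_image` below is a hypothetical nearest-feature lookup standing in for the paper's index-table matching, and all names are illustrative.

```python
import numpy as np

def rgb_features(img):
    """Six-number feature vector for an H x W x 3 image: the mean and
    standard deviation of each RGB channel, as used for the image index."""
    means = img.mean(axis=(0, 1))
    stds = img.std(axis=(0, 1))
    return np.concatenate([means, stds])

def query_image(index, img):
    """Return the key of the indexed entry whose stored features are
    closest (Euclidean distance) to the query image's features."""
    q = rgb_features(img)
    return min(index, key=lambda k: np.linalg.norm(index[k] - q))
```

Because the features summarize global color statistics, a query image with minor pixel-level differences still retrieves the right title.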

  20. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

    In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor which detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge, this is the first study using a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been done by direct observation, which is time-demanding, or by continuous video monitoring, which demands great effort in reviewing the material. A total of 98.5 monitoring hours was conducted. For the mini-DVR with VMD, a total of 35 min was spent reviewing the recordings to locate 75 pollinators, i.e., ca. 0.35 min of reviewing per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were classified only to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).

  1. Signal recovery in imaging photoplethysmography

    International Nuclear Information System (INIS)

    Holton, Benjamin D; Mannapperuma, Kavan; Lesniewski, Peter J; Thomas, John C

    2013-01-01

    Imaging photoplethysmography is an emerging technique for the extraction of biometric information from people using video recordings. The focus is on extracting the cardiac heart rate of the subject by analysing the luminance of the colour video signal and identifying periodic components. Advanced signal processing is needed to recover the information required. In this paper, independent component analysis (ICA), principal component analysis, auto- and cross-correlation are investigated and compared with respect to their effectiveness in extracting the relevant information from video recordings. Results obtained are compared with those recorded by a modern commercial finger pulse oximeter. It is found that ICA produces the most consistent results. (paper)
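
As a minimal sketch of the "identifying periodic components" step above: detrend the luminance trace and pick the dominant FFT peak in a plausible pulse band. The paper's most consistent method is ICA (applied across the colour channels before spectral analysis); this simpler spectral route only illustrates the principle, and the function name and band limits are assumptions.

```python
import numpy as np

def heart_rate_bpm(luminance, fps):
    """Estimate pulse rate from a mean-luminance trace of a face video:
    remove the DC component, take the real FFT, and pick the dominant
    peak in the physiologically plausible 0.7-4 Hz band (42-240 bpm)."""
    x = np.asarray(luminance, dtype=float)
    x = x - x.mean()                       # detrend (DC removal)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0) # restrict to pulse band
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```

In practice the trace would be the spatial mean of a skin region per frame; ICA would first separate the pulse component from motion and illumination sources.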

  3. Image Analysis of Eccentric Photorefraction

    Directory of Open Access Journals (Sweden)

    J. Dušek

    2004-01-01

    This article deals with image and data analysis of recorded video sequences of strabismic infants. It describes a unique noninvasive measuring system based on two measuring methods (position of the first Purkinje image relative to the centre of the lens, and eccentric photorefraction) for infants. The whole process is divided into three steps. The aim of the first step is to obtain video sequences on our special system (Eye Movement Analyser). Image analysis of the recorded sequences is then performed to obtain curves of basic eye reactions (accommodation and convergence). The last step is to calibrate these curves to the corresponding units (diopters and degrees of movement).

  4. Video-EEG recording: a four-year clinical audit.

    LENUS (Irish Health Repository)

    O'Rourke, K

    2012-02-03

    In the setting of a regional neurological unit without an epilepsy surgery service, as in our case, video-EEG telemetry is undertaken for three main reasons: to investigate whether frequent paroxysmal events represent seizures when there is clinical doubt, to attempt anatomical localization of partial seizures when standard EEG is unhelpful, and to attempt to confirm that seizures are non-epileptic when this is suspected. A clinical audit of all telemetry performed over a four-year period was carried out in order to determine the clinical utility of this aspect of the service and to identify means of improving its effectiveness in the unit. Analysis of the data showed a high rate of negative studies in which no attacks were recorded. Of the positive studies, approximately 50% showed non-epileptic attacks. Strategies for improving the rate of positive investigations are discussed.

  5. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  6. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic (CT) imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments from two patients undergoing robot-assisted laparoscopic partial nephrectomy, one for a tumor and the other for a partial staghorn renal calculus, were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D CT image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point (ICP) technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings, and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented-reality overlay of reconstructed 3D CT images onto real-time stereo video footage is possible using ICP and image-based surface tracking technology that does not require external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess precision and to achieve fully automated registration and display for intraoperative use.
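
The registration core named above, iterative closest point, can be sketched in 2-D: alternately match each point to its nearest model point and solve for the best rigid motion (Kabsch algorithm). This is the generic ICP loop, not the authors' modified surface-based variant, and all parameters are illustrative.

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    for paired points (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iterative closest point: pair each source point with its nearest
    model point, solve for the rigid motion, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]  # nearest-neighbor pairing
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur
```

For small initial misalignments the nearest-neighbor pairing is correct from the first iteration, which is why ICP-style registration works well once an initial pose is available.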

  7. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    International Nuclear Information System (INIS)

    Wright, R.; Zander, M.; Brown, S.; Sandoval, D.; Gilpatrick, D.; Gibson, H.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general-purpose, modular, and flexible video image processing system, imagetool, was used for the GTA beam profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) are discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. (Author) (3 figs., 4 refs.)

  8. Video clip transfer of radiological images using a mobile telephone in emergency neurosurgical consultations (3G Multi-Media Messaging Service).

    Science.gov (United States)

    Waran, Vicknes; Bahuri, Nor Faizal Ahmad; Narayanan, Vairavan; Ganesan, Dharmendra; Kadir, Khairul Azmi Abdul

    2012-04-01

    The purpose of this study was to validate and assess the accuracy and usefulness of sending short video clips in 3gp file format of an entire scan series of patients, using mobile telephones running on 3G-MMS technology, to enable consultation between junior doctors in a neurosurgical unit and the consultants on call after office hours. A total of 56 consecutive patients with acute neurosurgical problems requiring urgent after-hours consultation during a 6-month period prospectively had their images recorded and transmitted using the above method. The diagnoses and management plans of two neurosurgeons (who were not on site), based on the images viewed on a mobile telephone, were reviewed by an independent observer and scored. In addition, a radiologist reviewed the original images directly on the hospital's Picture Archiving and Communication System (PACS), and this was compared with the neurosurgeons' responses. Both neurosurgeons involved in this study were in complete agreement in their diagnoses. The radiologist disagreed with the diagnosis in only one patient, giving a kappa coefficient of 0.88, indicating almost perfect agreement. The use of mobile telephones to transmit MPEG video clips of radiological images is very advantageous for carrying out emergency consultations in neurosurgery. The images accurately reflect the pathology in question, thereby reducing the incidence of medical errors from incorrect diagnosis, which otherwise may depend on a verbal description alone.
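
The agreement statistic quoted above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance from each rater's label frequencies. A minimal computation (the example data are illustrative, not the study's):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same cases:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the chance agreement implied by each rater's marginal
    label frequencies."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    labels = set(ratings_a) | set(ratings_b)
    p_e = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n)
              for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

Values above roughly 0.8 are conventionally read as "almost perfect" agreement, which is how the study interprets its kappa of 0.88.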

  9. Multicamera High Dynamic Range High-Speed Video of Rocket Engine Tests and Launches

    Data.gov (United States)

    National Aeronautics and Space Administration — High-speed video recording of rocket engine tests has several challenges. The scenes that are imaged have both bright and dark regions associated with plume emission...

  10. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  11. Architecture of portable electronic medical records system integrated with streaming media.

    Science.gov (United States)

    Chen, Wei; Shih, Chien-Chou

    2012-02-01

    Due to increasing occurrence of accidents and illness during business trips, travel, or overseas studies, the requirement for portable EMR (Electronic Medical Records) has increased. This study proposes integrating streaming media technology into the EMR system to facilitate referrals, contracted laboratories, and disease notification among hospitals. The current study encoded static and dynamic medical images of patients into a streaming video format and stored them in a Flash Media Server (FMS). Based on the Taiwan Electronic Medical Record Template (TMT) standard, EMR records can be converted into XML documents and used to integrate description fields with embedded streaming videos. This investigation implemented a web-based portable EMR interchanging system using streaming media techniques to expedite exchanging medical image information among hospitals. The proposed architecture of the portable EMR retrieval system not only provides local hospital users the ability to acquire EMR text files from a previous hospital, but also helps access static and dynamic medical images as reference for clinical diagnosis and treatment. The proposed method protects property rights of medical images through information security mechanisms of the Medical Record Interchange Service Center and Health Certificate Authorization to facilitate proper, efficient, and continuous treatment of patients.

  12. A low-cost, high-resolution, video-rate imaging optical radar

    Energy Technology Data Exchange (ETDEWEB)

    Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F. [Sandia National Labs., Albuquerque, NM (United States); Grantham, J.W.; Monson, T. [Air Force Research Lab., Eglin AFB, FL (United States)

    1998-04-01

    Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

  13. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    Science.gov (United States)

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by the models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the obtained reconstructed image. The proposed training strategy and associated DNNs allows us to perform convincing super-resolution of pCLE images.
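
    The synthetic LR training-pair idea can be sketched as follows; this toy example uses plain average pooling plus additive noise, whereas the paper models realistic fibre-bundle sampling, so all names and parameters here are assumptions:

```python
# Toy sketch of synthetic low-resolution training-pair generation:
# average-pool an HR image by a factor, then add Gaussian noise.
import random

def downsample(hr, factor):
    """Average-pool a 2D list `hr` by `factor` in each dimension."""
    h, w = len(hr), len(hr[0])
    return [
        [
            sum(hr[y * factor + j][x * factor + i]
                for j in range(factor) for i in range(factor)) / factor ** 2
            for x in range(w // factor)
        ]
        for y in range(h // factor)
    ]

def make_training_pair(hr, factor=2, noise=1.0, seed=0):
    """Return a (synthetic_lr, hr) pair for exemplar-based DNN training."""
    rng = random.Random(seed)
    lr = [[v + rng.gauss(0, noise) for v in row] for row in downsample(hr, factor)]
    return lr, hr
```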

  14. A method for assessing the regional vibratory pattern of vocal folds by analysing the video recording of stroboscopy.

    Science.gov (United States)

    Lee, J S; Kim, E; Sung, M W; Kim, K H; Sung, M Y; Park, K S

    2001-05-01

    Stroboscopy and kymography have been used to examine the motional abnormality of vocal folds and to visualise their regional vibratory pattern. In a previous study (Laryngoscope, 1999), we introduced the conceptual idea of videostrobokymography, in which we applied the concept of kymography to pre-recorded video images obtained with stroboscopy, and showed its possible clinical application to various disorders in vocal folds. However, a more detailed description of the software and the mathematical formulation used in this system is needed for the reproduction of similar systems. The hardware composition, user interface, and detailed procedures, including the mathematical equations used in the videostrobokymography software, are presented in this study. As an initial clinical trial, videostrobokymography was applied to the preoperative and postoperative videostroboscopic images of 15 patients with Reinke's edema. On preoperative examination, videostrobokymograms showed an irregular mucosal-wave pattern and, in some patients, a relatively constant glottic gap during phonation. After the operation, the voice quality of all patients was improved in acoustic and aerodynamic assessments, and videostrobokymography showed clearly improved mucosal waves (change in open quotient: mean +/- SD = 0.11 +/- 0.05).
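
    The open quotient measured above can be sketched as follows; the threshold-based segmentation is a hypothetical simplification of the system's image-processing steps:

```python
# Open quotient (OQ) from a binarized kymogram line:
# OQ = open-phase duration / full vibratory-cycle duration.

def open_quotient(glottal_width, threshold=0):
    """glottal_width: per-frame glottal gap widths over one vibratory cycle."""
    open_frames = sum(1 for w in glottal_width if w > threshold)
    return open_frames / len(glottal_width)

# A cycle where the glottis is open for 5 of 10 frames gives OQ = 0.5.
oq = open_quotient([0, 0, 1, 3, 4, 3, 1, 0, 0, 0])
```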

  15. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  16. Developing an Interface to Order and Document Health Education Videos in the Electronic Health Record.

    Science.gov (United States)

    Wojcik, Lauren

    2015-01-01

    Transitioning to electronic health records (EHRs) provides an opportunity for health care systems to integrate educational content available on interactive patient systems (IPS) with the medical documentation system. This column discusses how one hospital simplified providers' workflow by making it easier to order educational videos and ensure that completed education is documented within the medical record. Integrating the EHR and IPS streamlined the provision of patient education, improved documentation, and supported the organization in meeting core requirements for Meaningful Use.

  17. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time...... Bødker’s in 1996, three possible areas of expansion to Susanne Bødker’s method for analyzing video data were found. Firstly, a technological expansion due to contemporary developments in sophisticated analysis software, since the mid 1990’s. Secondly, a conceptual expansion, where the applicability...... of using Activity Theory outside of the context of human–computer interaction, is assessed. Lastly, a temporal expansion, by facilitating an organized method for tracking the development of activities over time, within the coding and analysis of video data. To expand on the above areas, a prototype coding...

  18. Development Of A Dynamic Radiographic Capability Using High-Speed Video

    Science.gov (United States)

    Bryant, Lawrence E.

    1985-02-01

    High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging of up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 kV and 300 kV constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt 60 source. Use of a maximum aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  19. Development of a dynamic radiographic capability using high-speed video

    International Nuclear Information System (INIS)

    Bryant, L.E. Jr.

    1984-01-01

    High-speed video equipment can be used to optically image up to 2000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging of up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 kV and 300 kV constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt 60 source. Use of a maximum aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation

  20. Revolutionize Propulsion Test Facility High-Speed Video Imaging with Disruptive Computational Photography Enabling Technology

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced rocket propulsion testing requires high-speed video recording that can capture essential information for NASA during rocket engine flight certification...

  1. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. However, not all problems can be solved automatically; in some applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Actually, the idea of computer interactive systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with the ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  2. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme reduces memory traffic by 50% for VC-ME.
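
    The block-matching search that makes ME so time-consuming can be sketched as follows; this is a generic full-search SAD example in pure Python, not the paper's inter-frame data reuse scheme, and all names and sizes are illustrative:

```python
# Minimal full-search block-matching motion estimation.
# Frames are 2D lists of luma values.

def sad(cur, ref, bx, by, dx, dy, bsize):
    """Sum of absolute differences between the current block and a
    candidate block displaced by (dx, dy) in the reference frame."""
    total = 0
    for y in range(bsize):
        for x in range(bsize):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def motion_vector(cur, ref, bx, by, bsize=4, search=2):
    """Exhaustively search a (2*search+1)^2 window for the best match."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy and by + dy + bsize <= h and
                    0 <= bx + dx and bx + dx + bsize <= w):
                continue  # candidate block falls outside the reference frame
            cost = sad(cur, ref, bx, by, dx, dy, bsize)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

    Every candidate displacement re-reads overlapping reference pixels, which is exactly the data-reuse opportunity the paper's on-chip buffering exploits.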

  3. Laser scanning endoscope via an imaging fiber bundle for fluorescence imaging

    Science.gov (United States)

    Yeboah, Lorenz D.; Nestler, Dirk; Steiner, Rudolf W.

    1994-12-01

    Based on a laser scanning endoscope via an imaging fiber bundle, a new approach for a tumor diagnostic system has been developed to assist physicians in the diagnosis before the actual PDT is carried out. Laser induced, spatially resolved fluorescence images of diseased tissue can be compared with images received by video endoscopy using a white light source. The set-up is required to produce a better contrast between infected and healthy tissue and might serve as a constructive diagnostic aid for surgeons. The fundamental idea is to scan a low-power laser beam on an imaging fiber bundle and to achieve a spatially resolved projection on the tissue surface. A sufficiently high laser intensity from the diode laser is concentrated on each single spot of the tissue, exciting fluorescence when a dye has previously been accumulated. Subsequently, a video image of the tissue is recorded and stored. With an image processing unit, video and fluorescence images are overlaid, producing a picture of the fluorescence intensity in the environment of the observed tissue.

  4. Quantitative analysis of spider locomotion employing computer-automated video tracking

    DEFF Research Database (Denmark)

    Baatrup, E; Bayley, M

    1993-01-01

    The locomotor activity of adult specimens of the wolf spider Pardosa amentata was measured in an open-field setup, using computer-automated colour object video tracking. The x,y coordinates of the animal in the digitized image of the test arena were recorded three times per second during four...

  5. [Development of Diagrammatic Recording System for Choledochoscope and Its Clinical Application].

    Science.gov (United States)

    Xue, Zhao; Hu, Liangshuo; Tang, Bo; Zhang, Xiaogang; Lyu, Yi

    2017-11-30

    To develop a diagrammatic recording system for choledochoscopy and evaluate the system in clinical application. To match the real-time image and procedure illustration during choledochoscopy examination, we combined video-image capture and speech recognition technology to quickly generate personalized choledochoscopy images and text records. The new system could be used in sharing territorial electronic medical records, telecommuting, scientific research, and education. In the clinical application of 32 patients, the choledochoscopy diagrammatic recording system significantly improved the surgeons' working efficiency and patients' satisfaction. It could also meet the design requirement of remote information interaction. The proposed choledochoscopy diagrammatic recording system could elevate the quality of medical service and promote academic exchange and training.

  6. Video Game Preservation in the UK: A Survey of Records Management Practices

    Directory of Open Access Journals (Sweden)

    Alasdair Bachell

    2014-10-01

    Full Text Available Video games are a cultural phenomenon; a medium like no other that has become one of the largest entertainment sectors in the world. While the UK boasts an enviable games development heritage, it risks losing a major part of its cultural output through an inability to preserve the games that are created by the country’s independent games developers. The issues go deeper than bit rot and other problems that affect all digital media; loss of context, copyright and legal issues, and the throwaway culture of the ‘next’ game all hinder the ability of fans and academics to preserve video games and make them accessible in the future. This study looked at the current attitudes towards preservation in the UK’s independent (‘indie’ video games industry by examining current record-keeping practices and analysing the views of games developers. The results show that there is an interest in preserving games, and possibly a desire to do so, but issues of piracy and cost prevent the industry from undertaking preservation work internally, and from allowing others to assume such responsibility. The recommendation made by this paper is not simply for preservation professionals and enthusiasts to collaborate with the industry, but to do so by advocating the commercial benefits that preservation may offer to the industry.

  7. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  8. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  9. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Tominaga Shoji

    2008-01-01

    Full Text Available The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  10. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Plataniotis

    2008-05-01

    Full Text Available The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  11. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
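
    The topic-based publish/subscribe routing described above can be sketched as follows; the class and topic names are hypothetical, since the paper's messaging layer is not reproduced here:

```python
# Minimal topic-based publish/subscribe message bus (illustrative sketch).
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> [callback, ...]

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        """Register a callback for every message published to `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        """Route `message` to all subscribers interested in `topic`."""
        for callback in self._subscribers[topic]:
            callback(message)

# Example: an acquisition module publishes frames; a visualization
# module subscribes only to the topic it cares about.
bus = MessageBus()
received = []
bus.subscribe("frames/raw", received.append)
bus.publish("frames/raw", {"frame_id": 1})
bus.publish("frames/processed", {"frame_id": 1})  # no subscriber: dropped
```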

  12. C-space : Fostering new creative paradigms based on recording and sharing 'casual' videos through the internet

    NARCIS (Netherlands)

    Simoes, Bruno; Aksenov, Petr; Santos, Pedro; Arentze, Theo; De Amicis, Raffaele

    2015-01-01

    A key theme in ubiquitous computing is to create smart environments in which there is seamless integration of people, information, and physical reality. In this manuscript, we describe a set of tools that facilitate the creation of such environments, e,g, a service to transform videos recorded with

  13. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Science.gov (United States)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  14. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable for wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information correlated with the source is exploited as side information at the receiver. Complex processes in video encoding, such as motion vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to channel coding the original data and transmitting a decimated subset of the coded bits. We provide a performance evaluation of a low-density parity-check (LDPC) coding method in the AWGN channel.

  15. Video-based measurements for wireless capsule endoscope tracking

    International Nuclear Information System (INIS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions. (paper)
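
    The displacement-and-rotation estimation between consecutive frames can be illustrated with a small, pure-Python RANSAC sketch. This is not the authors' implementation (which uses speeded up robust features and full image registration); the function names and synthetic data below are invented for illustration.

```python
import math
import random

def rigid_from_two(p1, p2, q1, q2):
    # Rotation angle from the direction change of the segment p1->p2
    # versus q1->q2, then the translation that maps p1 onto q1.
    a = (math.atan2(q2[1] - q1[1], q2[0] - q1[0])
         - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    c, s = math.cos(a), math.sin(a)
    t = (q1[0] - (c * p1[0] - s * p1[1]), q1[1] - (s * p1[0] + c * p1[1]))
    return a, t

def ransac_rigid(matches, iters=200, tol=1.0, seed=0):
    """Estimate a 2D rotation+translation between matched interest
    points while tolerating outlier matches (RANSAC)."""
    rng = random.Random(seed)
    best = (0.0, (0.0, 0.0))
    best_inliers = 0
    for _ in range(iters):
        (p1, q1), (p2, q2) = rng.sample(matches, 2)  # minimal sample
        a, t = rigid_from_two(p1, p2, q1, q2)
        c, s = math.cos(a), math.sin(a)
        inliers = sum(
            1 for p, q in matches
            if math.hypot(c * p[0] - s * p[1] + t[0] - q[0],
                          s * p[0] + c * p[1] + t[1] - q[1]) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, t), inliers
    return best

# Synthetic frames: points rotated by 10 degrees and shifted by (5, -3),
# plus one wrong correspondence (an outlier).
a_true = math.radians(10)
pts = [(10.0, 0.0), (0.0, 10.0), (7.0, 7.0), (3.0, 1.0)]
matches = [(p, (math.cos(a_true) * p[0] - math.sin(a_true) * p[1] + 5,
                math.sin(a_true) * p[0] + math.cos(a_true) * p[1] - 3))
           for p in pts]
matches.append(((2.0, 2.0), (40.0, 40.0)))  # outlier match
angle, (tx, ty) = ransac_rigid(matches)
```

    The recovered rotation and translation between frames are what accumulate into the capsule's travel distance estimate.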

  16. Video-based measurements for wireless capsule endoscope tracking

    Science.gov (United States)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions.

  17. Time-lapse video system used to study nesting gyrfalcons

    Science.gov (United States)

    Booms, Travis; Fuller, Mark R.

    2003-01-01

    We used solar-powered time-lapse video photography to document nesting Gyrfalcon (Falco rusticolus) food habits in central West Greenland from May to July in 2000 and 2001. We collected 2677.25 h of videotape from three nests, representing 94, 87, and 49% of the nestling period at each nest. The video recorded 921 deliveries of 832 prey items. We placed 95% of the items into prey categories. The image quality was good but did not reveal enough detail to identify most passerines to species. We found no evidence that Gyrfalcons were negatively affected by the video system after the initial camera set-up. The video system experienced some mechanical problems but proved reliable. The system likely can be used to effectively document the food habits and nesting behavior of other birds, especially those delivering large prey to a nest or other frequently used site.

  18. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results on JPEG compressed images as well as MJPEG and H.264 compressed videos show improvement in artifact reduction of the proposed algorithm over other directional and spatial fuzzy filters.

  19. Smartphone-based Video of Demodex folliculorum In Biopsied Human Eyelash Follicles.

    Science.gov (United States)

    Vahedi, Mithaq; Davis, Gavin; Coleman, Michael James; Garrett, Brian Steven; Eghrari, Allen Omid

    2015-01-01

    The ability of smartphone technology to document static microscopy images has been well documented and is gaining widespread use in ophthalmology, where slit-lamp biomicroscopy is frequently utilized. However, little has been described regarding the use of smartphone technology to relay video of tissue microscopy results to patients, particularly when a tissue sample integrates motility of organisms as a characteristic feature of the disease. Here, we describe the method to use smartphone video to document motility of Demodex folliculorum in human eyelashes, individual results of which can be shown to patients for education and counseling purposes. The use of smartphone video in documenting the motility of organisms may prove to be beneficial in a variety of medical fields; producers of electronic medical records, therefore, may find it helpful to integrate video drop box tools.

  20. A video-image study of electrolytic flow structure in parallel electric-magnetic fields

    International Nuclear Information System (INIS)

    Gu, Z.H.; Fahidy, T.Z.

    1987-01-01

    The structure of free convective flow propagating from a vertical cathode into the electrolyte bulk has been studied via video-imaging. The enhancing effect of imposed horizontal uniform magnetic fields is manifest by vortex propagation and bifurcating flow

  1. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

    Full Text Available The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras, where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.

  2. Enhancements to the Sentinel Fireball Network Video Software

    Science.gov (United States)

    Watson, Wayne

    2009-05-01

    The Sentinel Fireball Network, which supports imaging of bright meteors (fireballs), has been in existence for over ten years. Nearly five years ago it moved from gathering meteor data with a camera and VCR video tape to a fisheye lens attached to a hardware device, the Sentinel box, which allowed meteor data to be recorded on a PC operating under real-time Linux. In 2006, that software, sentuser, was made available on the Apple, Linux, and Windows operating systems using the Python computer language. It provides basic video and management functionality and a small amount of analytic software capability. This paper describes new and attractive planned features of the software; additionally, it reviews some past and present research and networks that use video equipment to collect and analyze fireball data with applicability to sentuser.

  3. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems for both still images and videos. It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have been typically recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still images and videos: the dead leaves texture chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes well the apparent global motion as translations, but also rotations along the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras, such as DSCs, DSLRs and smartphones.

  4. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute fundamental steps in organizing, indexing, and retrieving video content. In this paper, a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using illumination-invariant chromaticity histograms in the independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.
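
    The shot segmentation step can be illustrated with a simplified sketch: compare histograms of consecutive frames and flag a cut when they differ strongly. The paper's method uses illumination-invariant chromaticity histograms in the IC feature space; the plain intensity histogram below is a hedged stand-in for that representation.

```python
def histogram(frame, bins=8):
    """Normalized intensity histogram of a frame given as a flat
    list of 0-255 pixel values."""
    h = [0] * bins
    for v in frame:
        h[v * bins // 256] += 1
    n = float(len(frame))
    return [c / n for c in h]

def shot_boundaries(frames, threshold=0.5):
    """Flag a cut wherever consecutive histograms differ strongly
    (L1 distance above a threshold)."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two dark frames followed by two bright frames: one cut at index 2.
dark, bright = [20] * 64, [230] * 64
cuts = shot_boundaries([dark, dark, bright, bright])
```

    Each detected boundary closes a shot; a keyframe is then chosen within each shot (in the paper, by the image-complexity metric).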

  5. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique.

    Directory of Open Access Journals (Sweden)

    Michael B McCamy

    Full Text Available Human eyes move continuously, even during visual fixation. These "fixational eye movements" (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.

  6. Social image quality

    Science.gov (United States)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed where the observers are the Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and has accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies such as crowdsourcing offer a promising new paradigm for image and video quality assessment where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet user generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases and will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
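
    The third aggregation algorithm is based on the Bradley-Terry model, which can be fitted from pair-comparison counts with a simple minorization-maximization loop. The sketch below is a minimal illustration, not the authors' code, and it omits the tie handling of Dykstra's extension; the vote counts are invented.

```python
def bradley_terry(wins, n_items, iters=100):
    """Fit Bradley-Terry strengths from pairwise votes.
    wins[i][j] = number of times item i was preferred over item j."""
    p = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            w_i = sum(wins[i])  # total wins of item i
            # Standard MM update: W_i / sum_j n_ij / (p_i + p_j)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            new.append(w_i / denom if denom else p[i])
        s = sum(new)
        p = [x * n_items / s for x in new]  # renormalize for stability
    return p

# Three versions of the same image; votes from pairwise comparisons.
wins = [[0, 8, 9],   # version 0 beat 1 eight times and beat 2 nine times
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins, 3)
ranking = sorted(range(3), key=scores.__getitem__, reverse=True)
```

    The fitted strengths give a total order over the images, which is exactly the perceived-quality ranking the website needs.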

  7. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  8. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is the joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users which fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.

  9. Video x-ray progressive scanning: new technique for decreasing x-ray exposure without decreasing image quality during cardiac catheterization

    International Nuclear Information System (INIS)

    Holmes, D.R. Jr.; Bove, A.A.; Wondrow, M.A.; Gray, J.E.

    1986-01-01

    A newly developed video x-ray progressive scanning system improves image quality, decreases radiation exposure, and can be added to any pulsed fluoroscopic x-ray system using a video display without major system modifications. With use of progressive video scanning, the radiation entrance exposure rate measured with a vascular phantom was decreased by 32 to 53% in comparison with a conventional fluoroscopic x-ray system. In addition to this substantial decrease in radiation exposure, the quality of the image was improved because of less motion blur and artifact. Progressive video scanning has the potential for widespread application to all pulsed fluoroscopic x-ray systems. Use of this technique should make cardiac catheterization procedures and all other fluoroscopic procedures safer for the patient and the involved medical and paramedical staff

  10. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Shaohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time instant; the marginal posterior distribution of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video-to-video recognition.

  11. Toward brain correlates of natural behavior: fMRI during violent video games.

    Science.gov (United States)

    Mathiak, Klaus; Weber, René

    2006-12-01

    Modern video games represent highly advanced virtual reality simulations and often contain virtual violence. In a significant proportion of young males, playing video games is a quotidian activity, making it an almost natural behavior. Recordings of brain activation with functional magnetic resonance imaging (fMRI) during gameplay may reflect neuronal correlates of real-life behavior. We recorded 13 experienced gamers (18-26 years; average 14 hrs/week playing) while playing a violent first-person shooter game (a violent computer game played in self-perspective) by means of distortion- and dephasing-reduced fMRI (3 T; single-shot triple-echo echo-planar imaging [EPI]). Content analysis of the video and sound with 100 ms time resolution yielded relevant behavioral variables. These variables explained significant signal variance across large distributed networks. Occurrence of violent scenes revealed significant neuronal correlates in an event-related design. Activation of dorsal and deactivation of rostral anterior cingulate and amygdala characterized the mid-frontal pattern related to virtual violence. Statistics and effect sizes can be considered large at these areas. Optimized imaging strategies allowed for single-subject and for single-trial analysis with good image quality at basal brain structures. We propose that virtual environments can be used to study neuronal processes involved in semi-naturalistic behavior as determined by content analysis. Importantly, the activation pattern reflects brain-environment interactions rather than stimulus responses as observed in classical experimental designs. We relate our findings to the general discussion on social effects of playing first-person shooter games. (c) 2006 Wiley-Liss, Inc.

  12. Efficient image or video encryption based on spatiotemporal chaos system

    International Nuclear Information System (INIS)

    Lian Shiguo

    2009-01-01

    In this paper, an efficient image/video encryption scheme is constructed based on a spatiotemporal chaos system. The chaotic lattices are used to generate pseudorandom sequences and then encrypt image blocks one by one. By iterating chaotic maps for certain times, the generated pseudorandom sequences obtain high initial-value sensitivity and good randomness. The pseudorandom bits in each lattice are used to encrypt the direct current coefficient (DC) and the signs of the alternating current coefficients (ACs). Theoretical analysis and experimental results show that the scheme has good cryptographic security and perceptual security, and it does not appreciably affect the compression efficiency. These properties make the scheme a suitable choice for practical applications.
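
    The general pattern of chaos-based encryption (iterate a chaotic map past its transient, quantize the orbit into a keystream, XOR it with the data) can be sketched as follows. This toy version uses a single logistic map rather than the paper's coupled spatiotemporal lattices, and it encrypts raw bytes rather than DC/AC coefficients; all parameter values are invented, and it is illustrative only, not cryptographically secure.

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Pseudorandom byte stream from iterating the logistic map
    x -> r*x*(1-x), discarding the initial transient."""
    x = x0
    for _ in range(burn_in):       # burn-in amplifies initial-value sensitivity
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)  # quantize the orbit to a byte
    return bytes(out)

def xor_crypt(data, key=(0.3141592, 3.9999)):
    # Encryption and decryption are the same XOR with the keystream;
    # the key is the map's initial value and control parameter.
    ks = logistic_keystream(key[0], key[1], len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plain = b"image block"
cipher = xor_crypt(plain)
restored = xor_crypt(cipher)
```

    Because decryption reuses the same chaotic orbit, even a tiny change in the key's initial value produces a completely different keystream and garbled output.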

  13. Development of a Video Network for Efficient Dissemination of the Graphical Images in a Collaborative Environment.

    Directory of Open Access Journals (Sweden)

    Anatoliy Gordonov

    1999-01-01

    Full Text Available Video distribution inside a local area network can impede or even paralyze normal data transmission activities. The problem can be solved, at least for a while, by compression and by increasing bandwidth, but that solution can become excessively costly or otherwise impractical. Moreover, experience indicates that usage quickly expands to test the limits of bandwidth. In this paper we introduce and analyze the architecture of a Hybrid Analog-Digital Video Network (ADViNet) which separates video distribution from standard data handling functions. The network preserves the features of a standard digital network and, in addition, provides efficient real-time full-screen video transmission through a separate analog communication medium. A specially developed control and management protocol is discussed. For all practical purposes ADViNet may be used when graphical images have to be distributed among many nodes of a local area network. It relieves the burden of video distribution and allows users to combine efficient video data transmission with normal regular network activities.

  14. Is it acceptable to video-record palliative care consultations for research and training purposes? A qualitative interview study exploring the views of hospice patients, carers and clinical staff.

    Science.gov (United States)

    Pino, Marco; Parry, Ruth; Feathers, Luke; Faull, Christina

    2017-09-01

    Research using video recordings can advance understanding of healthcare communication and improve care, but making and using video recordings carries risks. To explore views of hospice patients, carers and clinical staff about whether videoing patient-doctor consultations is acceptable for research and training purposes. We used semi-structured group and individual interviews to gather hospice patients', carers' and clinical staff's views. We used Braun and Clarke's thematic analysis. Interviews were conducted at one English hospice to inform the development of a larger video-based study. We invited patients with capacity to consent and whom the care team judged were neither acutely unwell nor severely distressed (11), carers of current or past patients (5), palliative medicine doctors (7), senior nurses (4) and communication skills educators (5). Participants viewed video-based research on communication as valuable because of its potential to improve communication, care and staff training. Video-based research raised concerns including its potential to affect the nature and content of the consultation and threats to confidentiality; however, these were not seen as sufficient grounds for rejecting video-based research. Video-based research was seen as acceptable and useful providing that measures are taken to reduce possible risks across the recruitment, recording and dissemination phases of the research process. Video-based research is an acceptable and worthwhile way of investigating communication in palliative medicine. Situated judgements should be made about when it is appropriate to involve individual patients and carers in video-based research on the basis of their level of vulnerability and ability to freely consent.

  15. Image quality assessment for video stream recognition systems

    Science.gov (United States)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results of recognition systems, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate the approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of solving the problem of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for the system for identity documents recognition, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, leading to blurring of frames.
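
    Selecting the best frames from a video stream typically starts from a cheap per-frame quality score. A common focus measure, shown below as a hedged stand-in for the paper's quality estimation module (which is more elaborate), is the variance of the Laplacian: blurred frames score near zero, sharp frames score high.

```python
def laplacian_variance(img):
    """Sharpness score for a grayscale image given as a list of rows
    of 0-255 values: variance of the 4-neighbour Laplacian."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def best_frame(frames):
    # Pick the sharpest frame in the stream for recognition.
    return max(range(len(frames)), key=lambda i: laplacian_variance(frames[i]))

# A high-contrast checkerboard frame between two flat (defocused) frames.
sharp = [[0 if (x + y) % 2 else 255 for x in range(8)] for y in range(8)]
blurred = [[128] * 8 for _ in range(8)]
idx = best_frame([blurred, sharp, blurred])
```

    In a resource-limited pipeline, frames whose score falls below a threshold can be rejected before the (much more expensive) recognition step is run, which is the feedback loop the paper describes.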

  16. Does sharing the electronic health record in the consultation enhance patient involvement? A mixed-methods study using multichannel video recording and in-depth interviews in primary care.

    Science.gov (United States)

    Milne, Heather; Huby, Guro; Buckingham, Susan; Hayward, James; Sheikh, Aziz; Cresswell, Kathrin; Pinnock, Hilary

    2016-06-01

    Sharing the electronic health-care record (EHR) during consultations has the potential to facilitate patient involvement in their health care, but research about this practice is limited. We used multichannel video recordings to identify examples and examine the practice of screen-sharing within 114 primary care consultations. A subset of 16 consultations was viewed by the general practitioner and/or patient in 26 reflexive interviews. Screen-sharing emerged as a significant theme and was explored further in seven additional patient interviews. Final analysis involved refining themes from interviews and observation of videos to understand how screen-sharing occurred, and its significance to patients and professionals. Eighteen (16%) of 114 videoed consultations involved instances of screen-sharing. Screen-sharing occurred in six of the subset of 16 consultations with interviews and was a significant theme in 19 of 26 interviews. The screen was shared in three ways: 'convincing' the patient of a diagnosis or treatment; 'translating' between medical and lay understandings of disease/medication; and by patients 'verifying' the accuracy of the EHR. However, patients and most GPs perceived the screen as the doctor's domain, not to be routinely viewed by the patient. Screen-sharing can facilitate patient involvement in the consultation, depending on the way in which sharing comes about, but the perception that the record belongs to the doctor is a barrier. To exploit the potential of sharing the screen to promote patient involvement, there is a need to reconceptualise and redesign the EHR. © 2014 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  17. Nursing students' self-evaluation using a video recording of foley catheterization: effects on students' competence, communication skills, and learning motivation.

    Science.gov (United States)

    Yoo, Moon Sook; Yoo, Il Young; Lee, Hyejung

    2010-07-01

    An opportunity for a student to evaluate his or her own performance enhances self-awareness and promotes self-directed learning. Using three outcome measures of competency of procedure, communication skills, and learning motivation, the effects of self-evaluation using a video recording of the student's Foley catheterization were investigated in this study. The students in the experimental group (n = 20) evaluated their Foley catheterization performance by reviewing the video recordings of their own performance, whereas students in the control group (n = 20) received written evaluation guidelines only. The results showed that the students in the experimental group had significantly better scores on competency of the procedure and on communication skills. Self-evaluation of performance by reviewing a videotape appears to increase the competency of clinical skills in nursing students. Copyright 2010, SLACK Incorporated.

  18. Technology Insight: current status of video capsule endoscopy.

    Science.gov (United States)

    Cave, David R

    2006-03-01

    Video capsule endoscopy (VCE) is the most recent major practical and conceptual development in the field of endoscopy. The video capsule endoscope-a small, pill-sized, passive imaging device-has been demonstrated to be the pre-eminent imaging device for disorders of the small intestine. The initial use for VCE was to detect the origin of obscure gastrointestinal bleeding. Several other indications have now been justified, or are in the process of evaluation. More than 200,000 of these disposable devices have been used worldwide, with an extraordinarily good safety record: indeed, the device has been approved for use in children as young as 10 years of age. In addition, a double-ended capsule has now been approved for the evaluation of mucosal disease in the esophagus. The now-widespread deployment of the device into gastrointestinal practice in the US and many other countries suggests that VCE has achieved mainstream utility. The development of similar competitor devices, and devices whose movement can be controlled, is in progress.

  19. Classifying Normal and Abnormal Status Based on Video Recordings of Epileptic Patients

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-01-01

    Full Text Available Based on video recordings of the movement of patients with epilepsy, this paper proposes a human action recognition scheme to detect distinct motion patterns and to distinguish the normal status from the abnormal status of epileptic patients. The scheme first extracts local features and holistic features, which are complementary to each other. Afterwards, a support vector machine is applied for classification. Based on the experimental results, this scheme obtains a satisfactory classification result and provides a fundamental analysis towards human-robot interaction with socially assistive robots caring for patients with epilepsy (or other patients with brain disorders) in order to protect them from injury.
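The pipeline described above — extract motion features from the video, then classify normal versus abnormal status with a support vector machine — can be sketched with a minimal linear SVM trained by subgradient descent on the hinge loss. The two-dimensional "motion energy" features and their separation below are invented purely for illustration, not taken from the paper.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM by subgradient descent on the hinge loss.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                          # point violates the margin
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                   # only apply weight decay
                w = (1 - lr * lam) * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# Toy features (e.g. mean speed, jerkiness) for two well-separated classes:
# "normal" around low motion energy, "abnormal" around high motion energy.
np.random.seed(0)
X = np.vstack([np.random.normal(0.0, 0.3, (20, 2)),
               np.random.normal(2.0, 0.3, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
```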

  20. Privacy information management for video surveillance

    Science.gov (United States)

    Luo, Ying; Cheung, Sen-ching S.

    2013-05-01

    The widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have been proposed to automatically redact images of trusted individuals in the surveillance video. To identify these individuals for protection, the most reliable approach is to use biometric signals such as iris patterns as they are immutable and highly discriminative. In this paper, we propose a privacy data management system to be used in a privacy-aware video surveillance system. The privacy status of a subject is anonymously determined based on her iris pattern. For a trusted subject, the surveillance video is redacted and the original imagery is considered to be the privacy information. Our proposed system allows a subject to access her privacy information via the same biometric signal for privacy status determination. Two secure protocols, one for privacy information encryption and the other for privacy information retrieval are proposed. Error control coding is used to cope with the variability in iris patterns and efficient implementation is achieved using surrogate data records. Experimental results on a public iris biometric database demonstrate the validity of our framework.
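The variability between iris captures, which the paper absorbs with error-control coding and surrogate data records, can be illustrated with the simpler, standard approach of thresholded Hamming distance between binary iris codes. The 0.32 acceptance threshold and the codes below are illustrative assumptions, not the authors' parameters.

```python
def hamming_fraction(a, b):
    """Fraction of differing bits between two equal-length iris codes."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def match_iris(probe, enrolled, threshold=0.32):
    """Return the index of the closest enrolled code if it lies within the
    acceptance threshold, else None. The threshold tolerates bit errors,
    playing the role that error-control coding plays in the paper."""
    best_i, best_d = None, 1.0
    for i, code in enumerate(enrolled):
        d = hamming_fraction(probe, code)
        if d < best_d:
            best_i, best_d = i, d
    return best_i if best_d <= threshold else None

# A probe with 10% of its bits flipped still matches its enrolled code.
enrolled = [[0] * 50 + [1] * 50, [1] * 100]
probe = enrolled[0][:]
for i in range(0, 100, 10):
    probe[i] ^= 1
```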

  1. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.
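A typical video-analysis exercise of the kind the book describes — digitizing an object's position frame by frame, then fitting the kinematics — can be sketched as follows. The frame rate and positions are synthetic stand-ins for data clicked out of a real clip.

```python
import numpy as np

FPS = 30.0                        # frame rate of the recording
t = np.arange(0, 1.0, 1 / FPS)   # time stamp of each video frame

# Vertical positions (metres) digitized frame by frame from the video,
# synthesized here for a ball tossed upward at 4 m/s from 1 m height.
y = 1.0 + 4.0 * t - 0.5 * 9.81 * t**2

# Fit y(t) = a*t^2 + b*t + c; the quadratic coefficient gives g = -2a.
a, b, c = np.polyfit(t, y, 2)
g_est = -2 * a
```

With real, noisy click data the same quadratic fit still recovers g to within a few percent, which makes it a good introductory-lab exercise.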

  2. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high resolution video images up to 4Mpixels@60 fps or high frame rate video images up to about 1000 fps@512x512pixels.
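The wavelet-based compression at the heart of such a video system can be illustrated with one level of an (unnormalized) 2-D Haar transform followed by thresholding of the detail coefficients. This is a toy stand-in for the advanced wavelet algorithms of the flight hardware, not the DVS implementation.

```python
import numpy as np

def haar2d(img):
    """One level of an unnormalized 2-D Haar transform (LL, LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2           # row averages
    d = (img[0::2] - img[1::2]) / 2           # row details
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def compress(img, keep=0.1):
    """Zero all but the largest `keep` fraction of detail coefficients."""
    ll, lh, hl, hh = haar2d(img)
    details = np.concatenate([b.ravel() for b in (lh, hl, hh)])
    cut = np.quantile(np.abs(details), 1 - keep)
    lh, hl, hh = (np.where(np.abs(b) >= cut, b, 0) for b in (lh, hl, hh))
    return ll, lh, hl, hh

# A flat image has all its energy in the LL band; the detail bands vanish.
ll, lh, hl, hh = haar2d(np.full((8, 8), 4.0))
```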

  3. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
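The core BPCS operation — measure the black-and-white border complexity of each bit-plane block and overwrite the noise-like ones with secret data — can be sketched as below. The 8×8 block size and the α = 0.3 complexity threshold follow common BPCS practice, and conjugation of non-complex secret blocks is omitted for brevity.

```python
import numpy as np

def complexity(block):
    """BPCS complexity: fraction of adjacent bit transitions in a block."""
    h = np.sum(block[:, 1:] != block[:, :-1])
    v = np.sum(block[1:, :] != block[:-1, :])
    n = block.shape[0]
    return (h + v) / (2 * n * (n - 1))    # max transitions for an n x n block

def embed(bitplane, secret_blocks, alpha=0.3, n=8):
    """Replace noise-like n x n regions of a bit-plane with secret blocks."""
    out = bitplane.copy()
    it = iter(secret_blocks)
    for r in range(0, out.shape[0] - n + 1, n):
        for c in range(0, out.shape[1] - n + 1, n):
            if complexity(out[r:r+n, c:c+n]) >= alpha:
                try:
                    out[r:r+n, c:c+n] = next(it)
                except StopIteration:
                    return out            # all secret data embedded
    return out

# A checkerboard bit-plane has maximal complexity, so its first block
# is replaced by the (single) secret block; the rest stays intact.
plane = np.indices((16, 16)).sum(axis=0) % 2
out = embed(plane, [np.zeros((8, 8), dtype=int)])
```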

  4. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  5. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  6. Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system

    Directory of Open Access Journals (Sweden)

    Michael B. McCamy

    2013-02-01

    Full Text Available Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so called “fixational eye movements”, which include microsaccades, drift, and ocular microtremor (OMT. Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, >0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT’s small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.

  7. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high resolution (512-by-512) pixels and high precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition is fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
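The paper's point about taking more than four samples per beat cycle comes down to estimating the beat signal's phase from its fundamental DFT bin, from which range follows. The sketch below uses hypothetical modulation parameters; the camera's actual frequencies and sample counts are not those given here.

```python
import cmath
import math

C = 299792458.0  # speed of light, m/s

def phase_from_samples(samples):
    """Phase of the fundamental component of N evenly spaced samples
    taken over one beat cycle (any N >= 3, not just the homodyne N = 4)."""
    n = len(samples)
    bin1 = sum(s * cmath.exp(-2j * math.pi * k / n)
               for k, s in enumerate(samples))
    return cmath.phase(bin1) % (2 * math.pi)

def phase_to_distance(phase, f_mod):
    """Convert round-trip phase shift to range for modulation frequency f_mod."""
    return phase * C / (4 * math.pi * f_mod)

# Nine samples per beat cycle of a beat signal with phase 1.0 rad,
# amplitude 2 and a DC offset of 3 (both rejected by the DFT bin).
samples = [3.0 + 2.0 * math.cos(2 * math.pi * k / 9 + 1.0) for k in range(9)]
```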

  8. Video manometry of the sphincter of Oddi: a new aid for interpreting manometric tracings and excluding manometric artefacts

    DEFF Research Database (Denmark)

    Madácsy, L; Middelfart, H V; Matzen, Peter

    2000-01-01

    was to develop a new method, sphincter of Oddi video manometry, based on simultaneous ESOM and real-time endoscopic image analysis, and to investigate the usefulness of video manometry for detecting manometric artefacts during ESOM. PATIENTS AND METHODS: Seven consecutive patients who had undergone cholecystectomy... and were referred with a suspicion of sphincter of Oddi dysfunction were investigated. Sphincter of Oddi pressure and endoscopic images (20 frames/s) were recorded simultaneously on a Synectics PC Polygraf computer system on a time-correlated basis, and then compared. RESULTS: On ESOM, 69 sphincter..., or retching, were also easily recognized using simultaneous ESOM and real-time endoscopic image analysis. CONCLUSIONS: Video manometry of the sphincter of Oddi is a promising new method for improving the analysis and documentation of ESOM tracings. It has several advantages over the conventional technique...

  9. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... questions Clinical Studies Publications Catalog Photos and Images Spanish Language Information Grants and Funding Extramural Research Division ... Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video ...

  10. Non-technical skills for obstetricians conducting forceps and vacuum deliveries: qualitative analysis by interviews and video recordings.

    Science.gov (United States)

    Bahl, Rachna; Murphy, Deirdre J; Strachan, Bryony

    2010-06-01

    Non-technical skills are cognitive and social skills required in an operational task. These skills have been identified and taught in the surgical domain but are of particular relevance to obstetrics where the patient is awake, the partner is present and the clinical circumstances are acute and often stressful. The aim of this study was to define the non-technical skills of an operative vaginal delivery (forceps or vacuum) to facilitate transfer of skills from expert obstetricians to trainee obstetricians. Qualitative study using interviews and video recordings. The study was conducted at two university teaching hospitals (St. Michael's Hospital, Bristol and Ninewells Hospital, Dundee). Participants included 10 obstetricians and eight midwives identified as experts in conducting or supporting operative vaginal deliveries. Semi-structured interviews were carried out using routine clinical scenarios. The experts were also video recorded conducting forceps and vacuum deliveries in a simulation setting. The interviews and video recordings were transcribed verbatim and analysed using thematic coding. The anonymised data were independently coded by the three researchers and then compared for consistency of interpretation. The experts reviewed the coded data for respondent validation and clarification. The themes that emerged were used to identify the non-technical skills required for conducting an operative vaginal delivery. The final skills list was classified into seven main categories. Four categories (situational awareness, decision making, task management, and team work and communication) were similar to the categories identified in surgery. Three further categories unique to obstetrics were also identified (professional relationship with the woman, maintaining professional behaviour and cross-monitoring of performance). 
This explicitly defined skills taxonomy could aid trainees' understanding of the non-technical skills to be considered when conducting an operative

  11. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart-rate (HR) from the imaged skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart-rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions-of-interest, and the number of video frames used for data processing.
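The essential VPPG computation — average a skin region's green channel per frame, then locate the dominant spectral peak in a plausible cardiac band — can be sketched as follows. The 30 fps rate, the 0.7-3.0 Hz band, and the synthetic trace are illustrative assumptions, not the study's settings.

```python
import numpy as np

def estimate_hr(green_means, fps, lo=0.7, hi=3.0):
    """Estimate heart rate (bpm) from the per-frame mean green-channel
    trace of a skin ROI, via the strongest peak in the cardiac band."""
    x = green_means - np.mean(green_means)      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)        # plausible HR: 42-180 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 10 s, 30 fps trace with a 1.2 Hz (72 bpm) pulsatile component.
fps = 30.0
t = np.arange(0, 10, 1 / fps)
trace = 100.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
```

Real traces additionally need motion compensation (the role of the KLT tracker in the study) and benefit from longer windows for finer frequency resolution.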

  12. Videos and images from 25 years of teaching compressible flow

    Science.gov (United States)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  13. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to overcome it and applying immersive cameras in photogrammetry provides a new potential. The paper presents two applications of immersive video in photogrammetry. At first, the creation of a low-cost mobile mapping system based on Ladybug®3 and GPS device is discussed. The amount of panoramas is much too high for photogrammetric purposes as the base line between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec and the measurements from panoramas enable the user to measure the area of outdoors (adverting structures) and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape and immersive video recorded in a short period of time is a candidate for economical and flexible measurements off-site. The second approach is a generation of 3d video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene and immersive video, separated into thousands of still panoramas, was converted from video into 3d objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3d modelling and provides promising features for mobile mapping systems.

  14. Investigating interactional competence using video recordings in ESL classrooms to enhance communication

    Science.gov (United States)

    Krishnasamy, Hariharan N.

    2016-08-01

    Interactional competence, or knowing and using the appropriate skills for interaction in various communication situations within a given speech community and culture is important in the field of business and professional communication [1], [2]. Similar to many developing countries in the world, Malaysia is a growing economy and undergraduates will have to acquire appropriate communication skills. In this study, two aspects of the interactional communicative competence were investigated, that is the linguistic and paralinguistic behaviors in small group communication as well as conflict management in small group communication. Two groups of student participants were given a problem-solving task based on a letter of complaint. The two groups of students were video recorded during class hours for 40 minutes. The videos and transcription of the group discussions were analyzed to examine the use of language and interaction in small groups. The analysis, findings and interpretations were verified with three lecturers in the field of communication. The results showed that students were able to accomplish the given task using verbal and nonverbal communication. However, participation was unevenly distributed, with two students talking for less than a minute. Negotiation was based more on alternative views and consensus was easily achieved. In conclusion, suggestions are given on ways to improve English language communication.

  15. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shao, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's lives. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for some auxiliary related videos from YouTube 1 according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip. 1 https://www.youtube.com/.

  16. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution – an application in higher education

    NARCIS (Netherlands)

    Jan Kuijten; Ajda Ortac; Hans Maier; Gert de Heer

    2015-01-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels).

  17. Image processing system for videotape review

    International Nuclear Information System (INIS)

    Bettendroffer, E.

    1988-01-01

    In a nuclear plant, the areas in which fissile materials are stored or handled have to be monitored continuously. One method of surveillance is to record the pictures from TV cameras at set time intervals on special video recorders. The 'time lapse' recorded tape is played back at normal speed and an inspector checks the pictures visually. This method requires much manpower, so an automated method would be useful. The present report describes an automatic reviewing method based on an image processing system; the system detects scene changes in the picture sequence and stores the reduced data set on a separate video tape. The resulting reduction in reviewing time for the inspector is important for surveillance data with few movements.
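The report does not specify its change-detection algorithm; a minimal stand-in is frame differencing against the last retained frame, archiving only frames whose mean absolute difference exceeds a threshold.

```python
import numpy as np

def scene_changes(frames, threshold=0.1):
    """Return indices of frames to retain: the first frame, plus every
    frame whose mean absolute difference from the previously retained
    frame exceeds `threshold` (pixel values assumed in 0..255)."""
    kept = [0]                          # always archive the first frame
    ref = frames[0].astype(float)
    for i, f in enumerate(frames[1:], start=1):
        f = f.astype(float)
        if np.mean(np.abs(f - ref)) / 255.0 > threshold:
            kept.append(i)              # scene changed: archive and rebase
            ref = f
    return kept

# Three static frames, then a sudden change held for two frames:
# only the first frame and the frame at the change are retained.
frames = [np.zeros((4, 4))] * 3 + [np.full((4, 4), 255.0)] * 2
kept = scene_changes(frames)
```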

  18. Comparative study of digital laser film and analog paper image recordings

    International Nuclear Information System (INIS)

    Lee, K.R.; Cox, G.G.; Templeton, A.W.; Preston, D.F.; Anderson, W.H.; Hensley, K.S.; Dwyer, S.J.

    1987-01-01

    The increase in the use of various imaging modalities demands higher quality and more efficacious analog image recordings. Laser electronic recordings with digital array prints of 4,000 x 5,000 x 12 bits obtained using laser-sensitive film or paper are being evaluated. Dry silver paper recordings are being improved and evaluated. High-resolution paper dot printers are being studied to determine their gray-scale capabilities. The authors evaluated the image quality, costs, clinical utilization, and acceptability of CT scans, MR images, digital subtraction angiograms, digital radiographs, and radionuclide scans recorded by seven different printers (three laser, three silver paper, and one dot) and compared the same features in conventional film recording. This exhibit outlines the technical developments and instrumentation of digital laser film and analog paper recorders and presents the results of the study

  19. Development of an integrated filing system for endoscopic images.

    Science.gov (United States)

    Fujino, M A; Ikeda, M; Yamamoto, Y; Kinose, T; Tachikawa, H; Morozumi, A; Sano, S; Kojima, Y; Nakamura, T; Kawai, T

    1991-01-01

    A new integrated filing system for endoscopic images has been developed, comprising a main image filing system and subsystems located at different stations. A hybrid filing system made up of both digital and analog filing devices was introduced, combining the merits of the two filing methods. Each subsystem, provided with a video processor, is equipped with a digital filing device, while routine images are recorded in the analog image filing device of the main system. The use of a multi-input adapter enables simultaneous input of analog images from up to 8 video processors. Recorded magneto-optical disks make it possible to recall the digital images at any station in the hospital; the disks can be copied without image degradation and also utilised for image processing. This system promises reliable storage and integrated, efficient management of endoscopic information. It also costs less to install than the so-called PACS (picture archiving and communication system), which connects all the stations of the hospital using optical fiber cables.

  20. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol.

    Science.gov (United States)

    Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter

    2017-10-25

    For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.

  1. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol

    Directory of Open Access Journals (Sweden)

    Mirae Harford

    2017-10-01

    Full Text Available Abstract Background For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. Methods We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. Discussion To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. Systematic review registration PROSPERO CRD42016029167

  2. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days... RESULTS: Kappa values for within-day identification of footstrike pattern revealed intra-rater agreement of 0.83-0.88 and inter-rater agreement of 0.50-0.63. Corresponding figures for between-day identification of footstrike pattern were 0.63-0.69 and 0.41-0.53, respectively. Identification of video time... in 36% of the identifications (kappa=0.41). The 95% limits of agreement for identification of video time frame at initial contact may, at times, allow for different identification of footstrike pattern. Clinicians should, therefore, be encouraged to continue using clinical 2D video setups for intra...
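The agreement figures above are chance-corrected kappa statistics. For reference, Cohen's kappa can be computed from two raters' label lists as follows; the rearfoot/forefoot labels in the example are invented for illustration, not the study's data.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)        # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

# Two raters agreeing on 16 of 20 footstrike judgements, balanced marginals:
# observed agreement 0.8, chance agreement 0.5, so kappa = 0.6.
r1 = ['RF'] * 10 + ['FF'] * 10
r2 = ['RF'] * 8 + ['FF'] * 2 + ['FF'] * 8 + ['RF'] * 2
k = cohens_kappa(r1, r2)
```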

  3. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on these principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, allowing the creation of intelligent computer vision systems for design and manufacturing.

  4. Video-rate resonant scanning multiphoton microscopy: An emerging technique for intravital imaging of the tumor microenvironment.

    Science.gov (United States)

    Kirkpatrick, Nathaniel D; Chung, Euiheon; Cook, Daniel C; Han, Xiaoxing; Gruionu, Gabriel; Liao, Shan; Munn, Lance L; Padera, Timothy P; Fukumura, Dai; Jain, Rakesh K

    2012-01-01

The abnormal tumor microenvironment fuels tumor progression, metastasis, immune suppression, and treatment resistance. Over the last several decades, developments in and applications of intravital microscopy have provided unprecedented insights into the dynamics of the tumor microenvironment. In particular, intravital multiphoton microscopy has revealed the abnormal structure and function of tumor-associated blood and lymphatic vessels, the role of aberrant tumor matrix in drug delivery, invasion and metastasis of tumor cells, the dynamics of immune cell trafficking to and within tumors, and gene expression in tumors. However, traditional multiphoton microscopy suffers from inherently slow imaging rates, only a few frames per second, and is thus unable to capture more rapid events such as blood flow, lymphatic flow, and cell movement within vessels. Here, we report the development and implementation of a video-rate multiphoton microscope (VR-MPLSM) based on resonant galvanometer mirror scanning that is capable of recording at 30 frames per second and acquiring intravital multispectral images. We show that the design of the system can be readily implemented and is adaptable to various experimental models. As examples, we demonstrate the utility of the system to directly measure flow within tumors, capture metastatic cancer cells moving within the brain vasculature and cells in lymphatic vessels, and image acute responses to changes in a vascular network. VR-MPLSM thus has the potential to further advance intravital imaging and provide new insight into the biology of the tumor microenvironment.

  5. [Telemedicine with digital video transport system].

    Science.gov (United States)

    Hahm, Joon Soo; Shimizu, Shuji; Nakashima, Naoki; Byun, Tae Jun; Lee, Hang Lak; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Sun Il; Kim, Tae Eun; Yun, Jiwon; Park, Yong Jin

    2004-06-01

The growth of internet-protocol-based technology has affected informatics and automatic control in medical fields. The aim of this study was to establish a telemedical educational system by developing high-quality image transfer using DVTS (digital video transport system) over a high-speed internet network. Using telemedicine, we were able to send surgical images not only domestically but also internationally, and we could discuss the course of surgical procedures between the operating room and the seminar room. The Korea-Japan cable network (KJCN) runs undersea between Busan and Fukuoka, while the Korea advanced research network (KOREN) was used to connect Busan and Seoul. To link images between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan, we started a teleconference system and a recorded image-streaming system with DVTS over an IPv4 network. Two operative cases were transmitted successfully. We could keep enough bandwidth, 60 Mbps, for two-line transmission. The transmitted moving image had no frame loss at a rate of 30 frames per second, the sound was clear, and the time delay was less than 0.3 s. Our study has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over internet protocol. It is easy to perform, reliable, and economical; thus, it will be a promising tool for worldwide telemedical communication in the future.

  6. Shifting Weights: Adapting Object Detectors from Image to Video (Author’s Manuscript)

    Science.gov (United States)

    2012-12-08

[Extraction residue from the manuscript PDF: fragments of Figure 1, showing example images of the "Skateboard", "Sewing machine", and "Sandwich" classes taken from ImageNet, and of a results table comparing InitialBL, VideoPosBL, the proposed method, and Gopalan et al. (PLS and SVM variants). The surviving text notes that six object classes were selected for detector learning because they are commonly present in the selected events.]

  7. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
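The white-balance stage mentioned above is commonly implemented with the gray-world assumption: scale each color channel so its mean matches the global mean. The sketch below is a generic illustration in numpy, not the paper's actual DSP pipeline, which is not reproduced in this record:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Scale each channel so its mean matches the global mean (gray-world).

    `rgb` is an HxWx3 array of 8-bit values; returns a float array
    clipped back to the displayable range.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means  # per-channel correction
    return np.clip(rgb * gains, 0.0, 255.0)
```

After balancing, all three channel means coincide, which removes a global color cast before the enhancement stages.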

  8. Data and videos for ultrafast synchrotron X-ray imaging studies of metal solidification under ultrasound

    Directory of Open Access Journals (Sweden)

    Bing Wang

    2018-04-01

Full Text Available The data presented in this article are related to the paper entitled 'Ultrafast synchrotron X-ray imaging studies of microstructure fragmentation in solidification under ultrasound' [Wang et al., Acta Mater. 144 (2018) 505-515]. This data article provides further supporting information and analytical methods, including data from both experiments and numerical simulations, as well as the Matlab code for processing the X-ray images. Six videos constructed from the processed synchrotron X-ray images are also provided.

  9. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  10. Individualized music played for agitated patients with dementia: analysis of video-recorded sessions.

    Science.gov (United States)

    Ragneskog, H; Asplund, K; Kihlgren, M; Norberg, A

    2001-06-01

    Many nursing home patients with dementia suffer from symptoms of agitation (e.g. anxiety, shouting, irritability). This study investigated whether individualized music could be used as a nursing intervention to reduce such symptoms in four patients with severe dementia. The patients were video-recorded during four sessions in four periods, including a control period without music, two periods where individualized music was played, and one period where classical music was played. The recordings were analysed by systematic observations and the Facial Action Coding System. Two patients became calmer during some of the individualized music sessions; one patient remained sitting in her armchair longer, and the other patient stopped shouting. For the two patients who were most affected by dementia, the noticeable effect of music was minimal. If the nursing staff succeed in discovering the music preferences of an individual, individualized music may be an effective nursing intervention to mitigate anxiety and agitation for some patients.

  11. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    Science.gov (United States)

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  12. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image-matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques because the latter needs thorough planning and proficiency. However, one faces mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient for decreasing the processing time and creating a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
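The blur screening and frame reduction described above can be approximated with a variance-of-Laplacian sharpness score: blurred frames score low and are dropped, and the remainder is subsampled to cut the short-baseline redundancy. The step size and threshold below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def sharpness(gray):
    """Variance of the 4-neighbour Laplacian response; low values indicate blur."""
    g = np.asarray(gray, dtype=np.float64)
    # 'valid' 3x3 Laplacian written with array slices (no SciPy needed)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def select_frames(frames, step, blur_threshold):
    """Keep every `step`-th frame whose sharpness exceeds `blur_threshold`."""
    return [i for i in range(0, len(frames), step)
            if sharpness(frames[i]) > blur_threshold]
```

The surviving frame indices can then be fed to an SFM/dense-matching pipeline in place of the full video.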

  13. EFFICIENT USE OF VIDEO FOR 3D MODELLING OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    B. Alsadik

    2015-03-01

Full Text Available Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image-matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques because the latter needs thorough planning and proficiency. However, one faces mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient for decreasing the processing time and creating a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are carried out using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.

  14. Consumer-based technology for distribution of surgical videos for objective evaluation.

    Science.gov (United States)

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

The Global Operative Assessment of Laparoscopic Skill (GOALS) is a validated metric used to grade laparoscopic skills and has been used to score recorded operative videos. To facilitate viewing of these recordings, we are developing novel techniques that enable surgeons to view the videos. The objective of this study is to determine the feasibility of utilizing widespread consumer-based technology to distribute appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output into a capture hub connected to a standard laptop computer via a universal serial bus (USB) port. A standard consumer-based video-editing program was used to capture the video and save it in an appropriate format. We used the mp4 format and, depending on file size, the videos were compressed, converted to another format (using a standard video-editing program), or sliced into multiple videos. Standard consumer-based programs were used to convert the videos into a format suited to handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and to video-sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were used. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons for grading via GOALS through various methods. Easy accessibility may make evaluation of resident videos less complicated and cumbersome.
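The compression and downscaling step described above maps naturally onto an ffmpeg invocation today. The flags below are standard ffmpeg options, but the filenames and parameter values are hypothetical, not taken from the study:

```python
def transcode_command(src, dst, max_height=480, crf=28):
    """Build an ffmpeg command that compresses a recorded case to H.264/mp4.

    The S-video capture workflow in the abstract is vendor-specific; this
    sketch only shows the kind of compression/downscaling step it describes.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", str(crf),       # constant-quality H.264
        "-vf", f"scale=-2:{max_height}",           # downscale, keep aspect ratio
        "-c:a", "aac", "-movflags", "+faststart",  # mp4 suited to streaming
        dst,
    ]

# Hypothetical filenames for a recorded porcine laparoscopic cholecystectomy.
cmd = transcode_command("lap_chole_raw.avi", "lap_chole_small.mp4")
```

The resulting list can be passed to `subprocess.run(cmd)` on any machine with ffmpeg installed.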

  15. Scratch's Third Body: Video Talks Back to Television

    Directory of Open Access Journals (Sweden)

    Leo Goldsmith

    2015-12-01

    Full Text Available Emerging in the UK in the 1980s, Scratch Video established a paradoxical union of mass-media critique, Left-wing politics, and music-video and advertising aesthetics with its use of moving-image appropriation in the medium of videotape. Enabled by innovative professional and consumer video technologies, artists like George Barber, The Gorilla Tapes, and Sandra Goldbacher and Kim Flitcroft deployed a style characterized by the rapid sampling and manipulation of dissociated images drawn from broadcast television. Inspired by the cut-up methods of William Burroughs and the audio sampling practiced by contemporary black American musicians, these artists developed strategies for intervening in the audiovisual archive of television and disseminating its images in new contexts: in galleries and nightclubs, and on home video. Reconceptualizing video's “body,” Scratch's appropriation of televisual images of the human form imagined a new hybrid image of the post-industrial body, a “third body” representing a new convergence of human and machine.

  16. Recording and Validation of Audiovisual Expressions by Faces and Voices

    Directory of Open Access Journals (Sweden)

    Sachiko Takagi

    2011-10-01

Full Text Available This study aims to further examine cross-cultural differences in multisensory emotion perception between Western and East Asian people. We recorded audiovisual stimulus videos of Japanese and Dutch actors saying a neutral phrase with one of the basic emotions, and then conducted a validation experiment on the stimuli. In the first part (facial expression), participants watched a silent video of the actors and judged which emotion each actor was expressing by choosing among six options (i.e., happiness, anger, disgust, sadness, surprise, and fear). In the second part (vocal expression), they listened to the audio track of the same videos without the video images, with the same task. We analyzed the categorization responses based on accuracy and confusion matrices and created a controlled audiovisual stimulus set.
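The accuracy and confusion-matrix analysis mentioned above can be sketched in a few lines. The label set matches the six options in the record; the scoring helpers themselves are generic illustrations, not the authors' analysis code:

```python
import numpy as np

EMOTIONS = ["happiness", "anger", "disgust", "sadness", "surprise", "fear"]

def confusion_matrix(true_labels, chosen_labels):
    """Rows: emotion actually expressed; columns: emotion the participant chose."""
    index = {name: i for i, name in enumerate(EMOTIONS)}
    m = np.zeros((len(EMOTIONS), len(EMOTIONS)), dtype=int)
    for t, c in zip(true_labels, chosen_labels):
        m[index[t], index[c]] += 1
    return m

def accuracy(m):
    """Fraction of responses on the diagonal (correct categorizations)."""
    return m.trace() / m.sum()
```

Off-diagonal cells then show which emotions are systematically confused, per stimulus set and per culture.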

  17. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    less vertical motion. The exceptions are videos from the classes of biking (mainly due to the camera tracking fast bikers), jumping on a trampoline ...tracking the bikers; the jumping videos, featuring people on trampolines , the swing videos, which are usually recorded in profile view, and the walking

  18. Medical students' perceptions of video-linked lectures and video-streaming

    Directory of Open Access Journals (Sweden)

    Karen Mattick

    2010-12-01

Full Text Available Video-linked lectures allow healthcare students across multiple sites, and between university and hospital bases, to come together for the purposes of shared teaching. Recording and streaming video-linked lectures allows students to view them at a later date and provides an additional resource to support student learning. As part of a UK Higher Education Academy-funded Pathfinder project, this study explored medical students' perceptions of video-linked lectures and video-streaming, and their impact on learning. The methodology involved semi-structured interviews with 20 undergraduate medical students across four sites and five year groups. Several key themes emerged from the analysis. Students generally preferred live lectures at the home site and saw interaction between sites as a major challenge. Students reported that their attendance at live lectures was not affected by the availability of streamed lectures and tended to be influenced more by the topic and speaker than by the technical arrangements. These findings will inform other educators interested in employing similar video technologies in their teaching. Keywords: video-linked lecture; video-streaming; student perceptions; decision-making; cross-campus teaching.

  19. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signal on visual media such as TV screens and LCD displays is affected by two main factors; the display technology and compression standards. Accurate knowledge about the characteristics of display and the video signal can be utilized to develop advanced...... on local LED-LCD backlight. Second, removing the digital video codec artifacts such as blocking and ringing artifacts by post-processing algorithms. A novel algorithm based on image features with optimal balance between visual quality and power consumption was developed. In addition, to remove flickering...... algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems; first, designing algorithms that improve the visual quality of perceived image and video and reduce power consumption...
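Local backlight dimming of the kind studied in this thesis can be sketched as a per-block LED level set from the local maximum, with the LCD transmittance boosted to compensate. This toy model ignores the LED point-spread function and the flicker handling that the thesis addresses; block size and headroom are illustrative parameters:

```python
import numpy as np

def dim_backlight(gray, block=16, headroom=1.0):
    """Per-block backlight level with LCD compensation (minimal sketch).

    Each block's LED is set just high enough for its brightest pixel, and the
    LCD pixel values are scaled up to compensate, saving power on dark content.
    """
    g = np.asarray(gray, dtype=np.float64) / 255.0
    h, w = g.shape
    backlight = np.zeros_like(g)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = g[y:y + block, x:x + block]
            backlight[y:y + block, x:x + block] = min(1.0, tile.max() * headroom)
    # LCD transmittance that reproduces the input where the LED is lit
    lcd = np.divide(g, backlight, out=np.zeros_like(g), where=backlight > 0)
    return backlight, lcd
```

The product `backlight * lcd` reproduces the input frame, while the backlight map indicates where power is saved.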

  20. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    OpenAIRE

Tominaga Shoji; Plataniotis Konstantinos N.; Trémeau Alain

    2008-01-01

The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain, this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the mos...

  1. Fluorescence imaging to quantify crop residue cover

    Science.gov (United States)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with longwave ultraviolet (UV) radiation, and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band-pass filter. A light-colored soil and a dark-colored soil were used as backgrounds for the weathered soybean stems. Residue cover was determined by counting the proportion of pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images, while the values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
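The cover estimate described above is a simple threshold count over the fluorescence image. A minimal sketch, with a hypothetical 8-bit frame and threshold (the study's actual threshold value is not given in the record):

```python
import numpy as np

def residue_cover_fraction(image, threshold):
    """Fraction of pixels whose fluorescence value exceeds a threshold.

    `image` is a 2-D array of 8-bit gray levels; pixels above `threshold`
    are counted as crop residue, the rest as soil background.
    """
    image = np.asarray(image)
    return float(np.count_nonzero(image > threshold)) / image.size

# Hypothetical 4x4 frame: soil pixels near 0, residue pixels bright.
frame = np.array([[10, 12, 200, 220],
                  [11, 13, 210, 230],
                  [ 9, 14, 205, 225],
                  [10, 15, 215, 235]], dtype=np.uint8)
print(residue_cover_fraction(frame, threshold=128))  # → 0.5
```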

  2. Computer simulation of radiographic images sharpness in several system of image record

    International Nuclear Information System (INIS)

    Silva, Marcia Aparecida; Schiable, Homero; Frere, Annie France; Marques, Paulo M.A.; Oliveira, Henrique J.Q. de; Alves, Fatima F.R.; Medeiros, Regina B.

    1996-01-01

A method to predict the influence of the recording system on radiographic image sharpness by computer simulation is studied. The method is intended to show in advance the image that would be obtained for each type of film or screen-film combination used during the exposure

  3. Toward enhancing the distributed video coder under a multiview video codec framework

    Science.gov (United States)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be used to shift encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC-reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratio (PSNR) of decoded video can be improved by 0.2 to 3.5 dB compared to H.264/AVC intracoding.
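The PSNR figures quoted above are computed in the standard way from the mean squared error between reference and decoded frames; a minimal implementation for 8-bit video:

```python
import numpy as np

def psnr(reference, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    ref = np.asarray(reference, dtype=np.float64)
    dec = np.asarray(decoded, dtype=np.float64)
    mse = np.mean((ref - dec) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 0.2 to 3.5 dB gain, as reported above, corresponds to a roughly 5% to 55% reduction in mean squared error.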

  4. MO-A-BRD-06: In Vivo Cherenkov Video Imaging to Verify Whole Breast Irradiation Treatment

    Energy Technology Data Exchange (ETDEWEB)

Zhang, R; Glaser, A [Dartmouth College, Hanover, NH (United States)]; Jarvis, L [Dartmouth-Hitchcock Medical Center, Lebanon, NH (United States)]; Gladstone, D [Dartmouth-Hitchcock Medical Center, Lebanon, NH (United States)]; Andreozzi, J; Hitchcock, W; Pogue, B [Dartmouth College, Hanover, NH (United States)]

    2014-06-15

Purpose: To show that in vivo video imaging of Cherenkov emission (Cherenkoscopy) can be acquired in the clinical treatment room without affecting the normal process of external beam radiation therapy (EBRT). Applications of Cherenkoscopy, such as patient positioning, movement tracking, treatment monitoring and superficial dose estimation, were examined. Methods: In a phase 1 clinical trial including 12 patients undergoing post-lumpectomy whole breast irradiation, Cherenkov emission was imaged with a time-gated ICCD camera synchronized to the radiation pulses during 10 fractions of treatment. Images from different treatment days were compared by calculating 2-D correlations with the averaged image. An edge-detection algorithm was used to highlight biological features such as blood vessels. Superficial doses deposited at the sampling depth were derived from the Eclipse treatment planning system (TPS) and compared with the Cherenkov images. Skin reactions were graded weekly according to the Common Toxicity Criteria, and digital photographs were obtained for comparison. Results: Real-time (4.8 fps) imaging of Cherenkov emission was feasible, and feasibility tests indicated that it could be improved to video rate (30 fps) with system improvements. Dynamic field changes due to fast MLC motion were imaged in real time. The average 2-D correlation was about 0.99, indicating that the stability of this imaging technique and the repeatability of patient positioning were outstanding. Edge-enhanced images of blood vessels were observed and could serve as unique biological markers for patient positioning and movement tracking (breathing). Small discrepancies exist between the Cherenkov images and the superficial dose predicted by the TPS, but the former agreed better with actual skin reactions than did the latter. Conclusion: Real-time Cherenkoscopy imaging during EBRT is a novel imaging tool that could be utilized for patient positioning, movement tracking

  5. An Efficient Fractal Video Sequences Codec with Multiviews

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2013-01-01

Full Text Available Multiview video consists of multiple views of the same scene. It requires an enormous amount of data to achieve high image quality, which makes compression indispensable. Therefore, data compression is a major issue for multiview video. In this paper, we explore an efficient fractal video codec to compress multiviews. The proposed scheme first compresses a view-dependent geometry of the base view using a fractal video encoder with a homogeneous region condition. With the extended fractional-pel motion estimation algorithm and a fast disparity estimation algorithm, it then generates prediction images of the other views. The prediction image uses image-based rendering techniques based on the decoded video, and the residual signals are obtained from the prediction image and the original image. Finally, it encodes the residual signals with the fractal video encoder. The idea is also to exploit the statistical dependencies of both temporal and interview reference pictures for motion-compensated prediction. Experimental results show that the proposed algorithm is consistently better than JMVC8.5, with a 62.25% bit rate decrease and a 0.37 dB PSNR increase based on the Bjontegaard metric, and the total encoding time (TET) of the proposed algorithm is reduced by 92%.
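The Bjontegaard metric cited above averages the bitrate gap between two rate-PSNR curves at equal quality. A compact numpy version of the standard cubic-fit formulation is sketched below; it is an illustration, not the authors' evaluation code:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate difference (%) at equal PSNR (Bjontegaard BD-rate).

    Fits a cubic through log-rate vs. PSNR for both codecs and integrates the
    gap over the overlapping PSNR range; negative means the test codec needs
    fewer bits for the same quality.
    """
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(p_a), np.polyint(p_t)
    avg_diff = (np.polyval(it, hi) - np.polyval(it, lo)
                - np.polyval(ia, hi) + np.polyval(ia, lo)) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```

For example, a codec that halves the bitrate at every PSNR point yields a BD-rate of -50%.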

  6. Development of P4140 video data wall projector; Video data wall projector

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, H.; Inoue, H. [Toshiba Corp., Tokyo (Japan)

    1998-12-01

The P4140 is a 3 cathode-ray tube (CRT) video data wall projector for super video graphics array (SVGA) signals. It is used as an image display unit, providing a large screen when several sets are put together. A high-quality picture has been realized through higher resolution and improved color uniformity technology. A new convergence adjustment system has also been developed through the optimal combination of digital and analog technologies. Installation of the video data wall has been greatly simplified by automating cube and cube-performance settings. The P4140 video data wall projector can be used for displaying not only data but video as well. (author)

  7. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Displays (LCDs) ... and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric, based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss...
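A PSNR-style metric in CIE L*a*b*, as proposed above, first needs an sRGB-to-Lab conversion. The standard D65 formulas can be written as follows; this is a generic illustration, and the paper's exact metric weighting is not reproduced here:

```python
import numpy as np

M_RGB_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                         [0.2126729, 0.7151522, 0.0721750],
                         [0.0193339, 0.1191920, 0.9503041]])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])  # reference white

def srgb_to_lab(rgb):
    """Convert 8-bit sRGB values (...x3 array) to CIE L*a*b* (D65 white)."""
    c = np.asarray(rgb, dtype=np.float64) / 255.0
    # inverse sRGB gamma, then linear RGB -> XYZ, normalized by the white point
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = linear @ M_RGB_TO_XYZ.T / WHITE_D65
    eps = (6.0 / 29.0) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

A PSNR can then be computed over the Lab differences of the reference and the dimmed/compensated image, so that both luminance reduction and color distortion are penalized.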

  8. A digital video tracking system

    Science.gov (United States)

    Giles, M. K.

    1980-01-01

    The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.

  9. Research on compression performance of ultrahigh-definition videos

    Science.gov (United States)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos, and with it the data volume, is increasing continuously; storage and transmission cannot be handled properly merely by expanding hard-disk capacity and upgrading transmission devices. Making full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance for a single image and for I-frames. Then, building on this idea and the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, super-resolution reconstruction technology further improves the reconstructed video quality. Experiments show that the performance of the proposed compression method for single images (I-frames) and video sequences is superior to that of HEVC in a low bit rate environment.

  10. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, and the urgent need to transfer large amounts of video over various networks makes the quality/volume trade-off of video encoding one of the most pressing problems. Digital TV signal compression reduces the amount of data used to represent a video stream, effectively reducing the stream required for transmission and storage. When television measuring systems are used, it is important to take into account the uncertainties caused by compression of the video signal. There are many digital compression methods; the aim of the proposed work is to research the influence of video compression on the measurement error in television systems. Measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. Errors in television-system measurements arise both from the optical system and from the method used to process the received video signal. Under compression with a constant data stream rate, these errors lead to large distortions; under compression with constant quality, they increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between the elements of the image. If a corresponding orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other; entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. One can select a transformation for which most of the matrix coefficients are almost zero for typical images. Excluding these zero coefficients also
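The intra-coding argument above (an orthogonal transform turns correlated image samples into coefficients that are mostly near zero) can be illustrated with a toy 1-D DCT. This is a generic sketch, not the paper's codec; the sample row and the `keep=3` choice are arbitrary.

```python
import math

N = 8

def dct(x):
    # Orthonormal DCT-II of an 8-sample row.
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            for k in range(N)]

def idct(X):
    # Inverse transform (DCT-III) of the orthonormal pair above.
    return [sum(X[k] * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
                * math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

def compress(x, keep=3):
    # Keep only the `keep` largest-magnitude coefficients, zero the rest.
    X = dct(x)
    order = sorted(range(N), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return [X[k] if k in kept else 0.0 for k in range(N)]

# A smooth (highly correlated) row of samples: most DCT energy sits in a
# few low-frequency coefficients, so dropping the rest changes little.
row = [100, 102, 104, 105, 106, 106, 105, 104]
approx = idct(compress(row))
```

With 5 of 8 coefficients zeroed, the reconstruction stays within about one grey level of the original row, which is the redundancy-removal effect the abstract describes.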

  11. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    Science.gov (United States)

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings, and its use is increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze them to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and mentors gave consent, and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills and 94% for clinical technical skills. Among the 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of the simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource-limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
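The study reports inter-rater reliability (IRR) for double-coded videos as percentages. The abstract does not state which IRR statistic was used, so the sketch below assumes the simplest variant, item-by-item percent agreement between two coders:

```python
def percent_agreement(coder_a, coder_b):
    """Inter-rater reliability as simple percent agreement between two
    coders' item-by-item scores. This is an assumed formula; the paper's
    exact IRR statistic is not given in the abstract."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("need two equal-length, non-empty score lists")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)
```

On a double-coded checklist, agreeing on 3 of 4 items yields 75%, the same scale as the 80% and 94% figures above.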

  12. Development of fast video recording of plasma interaction with a lithium limiter on T-11M tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Lazarev, V.B., E-mail: v_lazarev@triniti.ru [SSC RF TRINITI Troitsk, Moscow (Russian Federation); Dzhurik, A.S.; Shcherbak, A.N. [SSC RF TRINITI Troitsk, Moscow (Russian Federation); Belov, A.M. [NRC “Kurchatov Institute”, Moscow (Russian Federation)

    2016-11-15

    Highlights: • The paper presents the results of a study of tokamak plasma interaction with lithium capillary-porous-system limiters and PFCs using a high-speed color camera. • Registration of emission near the target in the SOL in neutral lithium light and measurement of the e-folding length for neutral lithium. • Registration of the effect of MHD instabilities on the CPS lithium limiter. • A sequence of frames shows the evolution of a lithium bubble on the surface of the lithium limiter. • View of filament structure near the plasma edge in ohmic mode. - Abstract: A new high-speed color camera with interference filters was installed in the T-11M tokamak vessel for fast video recording of plasma-surface interaction with a lithium limiter based on a capillary-porous system (CPS). The paper presents the results of the study of tokamak plasma interaction (frame exposure time up to 4 μs) with the CPS lithium limiter in a stable stationary phase and in unstable regimes with internal disruptions, together with results of processing the images of the light emission around the probe, i.e. the e-folding lengths for neutral lithium penetration and for lithium ion flux in the SOL region.

  13. Modular integrated video system (MIVS) review station

    International Nuclear Information System (INIS)

    Garcia, M.L.

    1988-01-01

    An unattended video surveillance unit, the Modular Integrated Video System (MIVS), has been developed by Sandia National Laboratories for International Safeguards use. An important support element of this system is a semi-automatic Review Station. Four component modules, including an 8 mm video tape recorder, a 4-inch video monitor, a power supply, and control electronics utilizing a liquid crystal display (LCD), are mounted in a suitcase for portability. The unit communicates through the interactive, menu-driven LCD and may be operated on facility power throughout the world. During surveillance, the MIVS records video information at specified time intervals, while also inserting consecutive scene numbers and tamper event information. Using either of two available modes of operation, the Review Station reads the inserted information, counts the number of missed scenes and/or tamper events encountered on the tapes, and reports this to the user on the LCD. At the end of a review session, the system summarizes the results of the review, stops the recorder, and advises the user of the completion of the review. In addition, the Review Station checks for any video loss on the tape.

  14. Enhance Video Film using Retinex method

    Science.gov (United States)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. Mean and standard deviation are used as criteria in this paper, applied to each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 Lux); these different environments approximate the reality of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip, to obtain the enhanced film directly; second, to every individual image, after which the enhanced images are compiled into the enhanced film. This paper shows that the enhancement technique gives a good-quality video film based on a statistical method, and it is recommended for use in different applications.
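The abstract uses mean and standard deviation as per-image criteria. A minimal sketch of that criterion on a grey-level frame; the flat-list image representation and the per-clip loop are assumptions, not the paper's implementation:

```python
import math

def frame_stats(frame):
    """Mean and standard deviation of one grey-level frame
    (a flat list of pixel intensities), the paper's quality criteria."""
    n = len(frame)
    mean = sum(frame) / n
    var = sum((p - mean) ** 2 for p in frame) / n
    return mean, math.sqrt(var)

def clip_stats(frames):
    # Apply the criterion to every image of the (e.g. 80-frame) clip,
    # before and after enhancement, so the two runs can be compared.
    return [frame_stats(f) for f in frames]
```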

  15. WT Bird. Bird collision recording for offshore wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Wiggelinkhuizen, E.J.; Rademakers, L.W.M.M.; Barhorst, S.A.M. [ECN Wind Energy, Petten (Netherlands); Den Boon, H. [E-Connection Project, Bunnik (Netherlands); Dirksen, S. [Bureau Waardenburg, Culemborg (Netherlands); Schekkerman, H. [Alterra, Wageningen (Netherlands)

    2004-11-01

    A new method for monitoring of bird collisions has been developed using video and audio registrations that are triggered by sound and vibration measurements. Remote access to the recorded images and sounds makes it possible to count the number of collisions as well as to identify the species. After the successful proof of principle and evaluation on small land-based turbines the system is now being designed for offshore wind farms. Currently the triggering system and video and audio registration are being tested on large land-based wind turbines using bird dummies. Tests of three complete prototype systems are planned for 2005.

  16. Video motion detection for physical security applications

    International Nuclear Information System (INIS)

    Matter, J.C.

    1990-01-01

    Physical security specialists have been attracted to the concept of video motion detection for several years. Claimed potential advantages included additional benefit from existing video surveillance systems, automatic detection, improved performance compared to human observers, and cost-effectiveness. In recent years, significant advances in image-processing dedicated hardware and image analysis algorithms and software have accelerated the successful application of video motion detection systems to a variety of physical security applications. Early video motion detectors (VMDs) were useful for interior applications of volumetric sensing. Success depended on having a relatively well-controlled environment. Attempts to use these systems outdoors frequently resulted in an unacceptable number of nuisance alarms. Currently, Sandia National Laboratories (SNL) is developing several advanced systems that employ image-processing techniques for a broader set of safeguards and security applications. The Target Cueing and Tracking System (TCATS), the Video Imaging System for Detection, Tracking, and Assessment (VISDTA), the Linear Infrared Scanning Array (LISA), the Mobile Intrusion Detection and Assessment System (MIDAS), and the Visual Artificially Intelligent Surveillance (VAIS) systems are described briefly.

  17. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Full Text Available Recent technological developments have made surveillance video a primary means of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Because surveillance footage offers very strong support for solving criminal cases, creating effective policy and applying useful methods to the retrieval of additional evidence are becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and with bad visual quality. In this paper, we discuss the characteristics of surveillance video and describe a manual feature registration – maximum a posteriori – projection onto convex sets super-resolution reconstruction method, which improves the quality of surveillance video. With this method, we not only make optimal use of the information contained in the LR video images but also keep the image edges clear and guarantee convergence of the algorithm. Finally, we suggest how to adjust the algorithm's adaptability by analyzing the prior information of the target image.

  18. Cryptanalysis of a spatiotemporal chaotic image/video cryptosystem

    International Nuclear Information System (INIS)

    Rhouma, Rhouma; Belghith, Safya

    2008-01-01

    This Letter proposes two different attacks on a recently proposed chaotic cryptosystem for images and videos in [S. Lian, Chaos Solitons Fractals (2007), (doi: 10.1016/j.chaos.2007.10.054)]. The cryptosystem under study displays a weakness in the generation of the keystream: the encryption is made by generating a keystream mixed with blocks generated from the plaintext and the ciphertext in a CBC mode design, and the keystream so obtained remains unchanged for every encryption procedure. Guessing the keystream leads to guessing the key, so two attacks based on this drawback are able to break the whole cryptosystem. We also propose changing the description of the cryptosystem to a PCBC mode design to make it robust against the described attacks.

  19. Researching on the process of remote sensing video imagery

    Science.gov (United States)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned air vehicles has the advantages of higher resolution, easy acquisition, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, owing to practical limitations, the video images are unstable, the targets move fast, and the shooting background is complex, which makes such video difficult to process. In other fields, especially computer vision, research on video images is more extensive and is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, including research purposes, data sources, and the pros and cons of each technology. The paper then explores the methods most suitable for low-altitude remote sensing video image processing.

  20. A Primer on Endoscopic Electronic Medical Records

    OpenAIRE

    Atreja, Ashish; Rizk, Maged; Gurland, Brooke

    2010-01-01

    Endoscopic electronic medical record systems (EEMRs) are now increasingly utilized in many endoscopy centers. Modern EEMRs not only support endoscopy report generation, but often include features such as practice management tools, image and video clip management, inventory management, e-faxes to referring physicians, and database support to measure quality and patient outcomes. There are many existing software vendors offering EEMRs, and choosing a software vendor can be time consuming and co...

  1. Video Surveillance: Privacy Issues and Legal Compliance

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2015-01-01

    Pervasive usage of video surveillance is rapidly increasing in developed countries. Continuous security threats to public safety demand use of such systems. Contemporary video surveillance systems offer advanced functionalities which threaten the privacy of those recorded in the video. There is a...

  2. Energy use of televisions and video cassette recorders in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Meier, Alan; Rosen, Karen

    1999-03-01

    In an effort to more accurately determine nationwide energy consumption, the U.S. Department of Energy has recently commissioned studies with the goal of improving its understanding of the energy use of appliances in the miscellaneous end-use category. This study presents an estimate of the residential energy consumption of two of the most common domestic appliances in the miscellaneous end-use category: color televisions (TVs) and video cassette recorders (VCRs). The authors used a bottom-up approach in estimating national TV and VCR energy consumption. First, they obtained estimates of stock and usage from national surveys, while TV and VCR power measurements and other data were recorded at repair and retail shops. Industry-supplied shipment and sales distributions were then used to minimize bias in the power measurement samples. To estimate national TV and VCR energy consumption values, ranges of power draw and mode usage were created to represent situations in homes with more than one unit. Average energy use values for homes with one unit, two units, etc. were calculated and summed to provide estimates of total national TV and VCR energy consumption.
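The bottom-up approach described above combines appliance stock with per-mode power draw and usage hours to reach a national total. A sketch of that arithmetic, with illustrative figures that are assumptions, not the study's measured values:

```python
def annual_twh(stock_millions, modes):
    """Bottom-up national energy estimate: stock x per-mode power draw x
    per-mode hours of use per day, summed over modes and a 365-day year.
    `modes` maps a mode name to (watts, hours_per_day)."""
    kwh_per_unit = sum(w * h * 365 / 1000 for w, h in modes.values())
    return stock_millions * 1e6 * kwh_per_unit / 1e9  # TWh/year

# Hypothetical TV stock and mode profile (active vs. standby draw).
tv = annual_twh(220, {"active": (75, 4.0), "standby": (5, 20.0)})
```

With these assumed inputs a unit uses 146 kWh/year (109.5 kWh active + 36.5 kWh standby), giving 32.12 TWh/year for a 220-million-unit stock; the study refines exactly this calculation with measured power draws and survey-based usage distributions.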

  3. High speed video recording system on a chip for detonation jet engine testing

    Directory of Open Access Journals (Sweden)

    Samsonov Alexander N.

    2018-01-01

    Full Text Available This article describes the development of a system on a chip for high-speed video recording. The research was started because of the difficulty of selecting FPGAs and CPUs that combine wide bandwidth, high speed, and a large number of multipliers for real-time signal analysis. The current trend of high-density silicon integration will soon result in a hybrid sensor-controller-memory circuit packed into a single chip, and this research was the first step in a series of experiments on manufacturing such hybrid devices. The current task is the high-level synthesis of high-speed logic and a CPU core in an FPGA. The work resulted in an FPGA-based prototype implementation and its examination.

  4. Record Desktop Activity as Streaming Videos for Asynchronous, Video-Based Collaborative Learning.

    Science.gov (United States)

    Chang, Chih-Kai

    As Web-based courses using videos have become popular in recent years, the issue of managing audiovisual aids has become noteworthy. The contents of audiovisual aids may include a lecture, an interview, a featurette, an experiment, etc. The audiovisual aids of Web-based courses are transformed into the streaming format that can make the quality of…

  5. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  6. Efficient management and promotion of utilization of the video information acquired by observation

    Science.gov (United States)

    Kitayama, T.; Tanaka, K.; Shimabukuro, R.; Hase, H.; Ogido, M.; Nakamura, M.; Saito, H.; Hanafusa, Y.; Sonoda, A.

    2012-12-01

    In the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), deep-sea videos have been produced during research by JAMSTEC submersibles since 1982, and this huge body of deep-sea information, now amounting to more than 4,000 dives (ca. 24,700 tapes), has been open to the public via the Internet since 2002. The deep-sea videos are important because they make it possible to examine the time variation of the deep-sea environment, where investigation and collection are difficult, and the growth of living things in extreme environments. Moreover, with the development of video technology, advanced analysis of survey images has become possible, and for grasping the deep-sea environment the utility value of the images is especially high. JAMSTEC's Data Research Center for Marine-Earth Sciences (DrC) collects the videos obtained during JAMSTEC dive investigations and carries out their preservation, quality control, and release to the public. Efficiently managing this huge volume of video information, whose utility value keeps expanding, and promoting its use are major challenges for us; this announcement introduces our present measures on these subjects. The videos recorded onboard on tape or various other media are collected, and backup and encoding are performed to prevent loss and degradation. Because the video files on hard disk are large, we use the Linear Tape File System (LTFS), which is attracting attention in image-management engineering these days: compared with the usual disk backup it costs little, yet it can preserve video data over many years, and files are operated on much as on a disk. Videos transcoded for distribution are archived on disk storage and can be offered according to use. To promote utilization of the videos, the public video presentation system was completely renewed in November 2011 as the "JAMSTEC E-library of Deep Sea Images (http://www.godac.jamstec.go.jp/jedi/)". This new system has preparing

  7. Using video playbacks to study visual communication in a marine fish, Salaria pavo.

    Science.gov (United States)

    Gonçalves; Oliveira; Körner; Poschadel; Schlupp

    2000-09-01

    Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poecilid Poecilia formosa showed an equal preference for a live and video image of a P. mexicana male, suggesting a response to live animals as strong as to video images. We discuss differences between the species that may explain their opposite reaction to video images. Copyright 2000 The Association for the Study of Animal Behaviour.

  8. Computer-Aided Video Differential Planimetry

    Science.gov (United States)

    Tobin, Michael; Djoleto, Ben D.

    1984-08-01

    THE VIDEO DIFFERENTIAL PLANIMETER (VDP)1 is a remote sensing instrument that can measure minute changes in the area of any object seen by an optical scanning system. The composite video waveforms obtained by scanning the object against a contrasting background are amplified and shaped to yield a sequence of constant-amplitude pulses whose polarity distinguishes the studied area from its background and whose varying widths reflect the dynamics of the viewed object. These pulses are passed through a relatively long time-constant capacitor-resistor circuit and are then fed into an integrator. The net integration voltage resulting from the most recent sequence of object-background time pulses is recorded, and the integrator is returned to zero at the end of each video frame. If the object's area remains constant throughout the following frame, the integrator's summation will also remain constant. However, if the object's area varies, the positive and negative time pulses entering the integrator will change, and the integrator's summation will vary proportionately. The addition of a computer interface and a video recorder enhances the versatility and the resolving power of the VDP by permitting the repeated study and analysis of selected portions of the recorded data, thereby uncovering the major sources of the object's dynamics. Among the medical and biological procedures for which COMPUTER-AIDED VIDEO DIFFERENTIAL PLANIMETRY is suitable are Ophthalmoscopy, Endoscopy, Microscopy, Plethysmography, etc. A recent research study in Ophthalmoscopy2 is cited to suggest a useful application of Video Differential Planimetry.
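The VDP's per-frame integration can be idealized in a few lines: signed pulse widths are summed within each frame, and frame-to-frame differences of that sum expose the object's dynamics. This is a digital simplification of the analog capacitor-resistor integrator described above, with all inputs hypothetical:

```python
def frame_area(pulse_widths, polarities):
    """Net per-frame integrator output: the signed sum of pulse widths,
    with polarity +1 for the object and -1 for the background (an
    idealization of the analog integration described in the abstract)."""
    return sum(w * p for w, p in zip(pulse_widths, polarities))

def area_changes(frames):
    # The integrator is zeroed at each frame boundary, so the quantity of
    # interest is the frame-to-frame difference of the per-frame sums.
    sums = [frame_area(w, p) for w, p in frames]
    return [b - a for a, b in zip(sums, sums[1:])]
```

A constant-area object yields zero differences; a growing object yields positive ones, mirroring the proportional response of the analog summation.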

  9. Integration of prior knowledge into dense image matching for video surveillance

    Science.gov (United States)

    Menze, M.; Heipke, C.

    2014-08-01

    Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups they do not easily generalize to more challenging camera configurations. In the context of video surveillance the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.

  10. A software oscilloscope for DOS computers with an integrated remote control for a video tape recorder. The assignment of acoustic events to behavioural observations.

    Science.gov (United States)

    Höller, P

    1995-12-01

    With only a little knowledge of programming IBM-compatible computers in Basic, it is possible to create a digital software oscilloscope with sampling rates up to 17 kHz (depending on the CPU and bus speed). The only additional hardware requirement is a common sound card compatible with the Soundblaster. The system presented in this paper is built to analyse the direction a flying bat is facing during sound emission. For this reason the system works with some additional hardware devices in order to monitor video sequences on the computer screen, overlaid by an online oscillogram. Using an RS232 interface to a Panasonic video tape recorder, both the oscillogram and the video tape recorder can be controlled simultaneously, and recordings can moreover be analysed frame by frame. Not only acoustical events, but also APs, myograms, EEGs and other physiological data can be digitized and analysed in combination with the behavioural data of an experimental subject.

  11. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras aboard a sub-orbital rocket were controlled in real time from the ground during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the selection of the analog video signal sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors, and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  12. Video Bioinformatics Analysis of Human Embryonic Stem Cell Colony Growth

    Science.gov (United States)

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-01-01

    Because video data are complex and are comprised of many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, results were virtually identical, indicating the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527
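The three CL-Quant recipes (segment into colony vs. background, enhance, then count colony pixels over time) can be caricatured with a fixed intensity threshold and a per-frame pixel count. This is not CL-Quant's algorithm, only the measurement idea; the threshold value and list-of-rows frame format are assumptions:

```python
def colony_pixels(frame, threshold):
    """Segment one grey-level frame (a list of rows of intensities) into
    colony vs. background by a fixed threshold, and count colony pixels.
    A stand-in for the segmentation/measurement recipes, not CL-Quant."""
    return sum(1 for row in frame for p in row if p >= threshold)

def growth_curve(frames, threshold=128):
    # Pixel count per frame over the time-lapse; an increasing curve
    # corresponds to colony growth over the 48-hour recording.
    return [colony_pixels(f, threshold) for f in frames]
```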

  13. An integrated video- and weight-monitoring system for the surveillance of highly enriched uranium blend down operations

    International Nuclear Information System (INIS)

    Lenarduzzi, R.; Castleberry, K.; Whitaker, M.; Martinez, R.

    1998-01-01

    An integrated video-surveillance and weight-monitoring system has been designed and constructed for tracking the blending down of weapons-grade uranium by the US Department of Energy. The instrumentation is being used by the International Atomic Energy Agency in its task of tracking and verifying the blended material at the Portsmouth Gaseous Diffusion Plant, Portsmouth, Ohio. The weight instrumentation, developed at the Oak Ridge National Laboratory, monitors and records the weight of cylinders of highly enriched uranium as their contents are fed into the blending facility, while the video equipment, provided by Sandia National Laboratory, records periodic and event-triggered images of the blending area. A secure data network between the scales, cameras, and computers ensures data integrity and eliminates the possibility of tampering. The details of the weight-monitoring instrumentation, the video- and weight-system interaction, and the secure data network are discussed

  14. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    Science.gov (United States)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for transferring human action recognition from RGB video to depth video using domain adaptation, in which features learned from RGB videos are used to recognize actions in depth videos. We take three steps to solve this problem. First, because video is more complex than a still image, carrying both spatial and temporal information, the dynamic image method is used to encode each RGB or depth video as a single image; with this representation, most image feature extraction methods become applicable to video. Second, once a video is represented as an image, a standard CNN model can be used for training and testing, and it also serves as a feature extractor owing to its powerful representational ability. Third, because RGB and depth videos belong to two different domains, domain adaptation is applied to bring the two feature domains closer together, so that features learned by the RGB model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and domain adaptation from RGB to depth action recognition yields an accuracy improvement of more than 2%.
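
    The dynamic image representation can be approximated by rank pooling with closed-form weights (approximate rank pooling, after Bilen et al.). This sketch assumes that formulation, which the abstract does not spell out:

    ```python
    import numpy as np

    def dynamic_image(frames):
        """Collapse a video clip of shape (T, H, W) into a single 'dynamic
        image' via approximate rank pooling: a weighted sum whose
        coefficients alpha_t = 2t - T - 1 grow with t, so later frames are
        emphasised and temporal order is encoded in one image."""
        frames = np.asarray(frames, dtype=float)
        T = frames.shape[0]
        alphas = 2 * np.arange(1, T + 1) - T - 1   # sums to zero
        return np.tensordot(alphas, frames, axes=1)
    ```

    Because the weights sum to zero, a completely static clip collapses to an all-zero dynamic image; only changes over time survive, which is what makes the representation useful as CNN input.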

  15. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video intervention....../prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We...... observation and instruction (directives) relayed across different spaces; 2) the use of recorded video by participants to visualise, spatialise and localise talk and action that is distant in time and/or space; 3) the translating, stretching and cutting of social experience in and through the situated use...

  16. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    Science.gov (United States)

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
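
    Intensity-ratio processing of this kind can be illustrated with classic two-colour (ratio) pyrometry under the Wien approximation. The wavelengths and temperature below are illustrative assumptions; the patent's actual calibration is not described in the abstract:

    ```python
    import math

    C2 = 1.4388e-2  # second radiation constant, m*K

    def log_intensity_ratio(lam1, lam2, T):
        """ln[I(lam1)/I(lam2)] for a blackbody in the Wien approximation.
        Working in log space avoids under/overflow of the raw exponentials."""
        return 5.0 * math.log(lam2 / lam1) + (C2 / T) * (1.0 / lam2 - 1.0 / lam1)

    def ratio_temperature(log_r, lam1, lam2):
        """Invert the two-colour ratio: recover temperature T from the
        measured log intensity ratio of two colour channels."""
        return C2 * (1.0 / lam2 - 1.0 / lam1) / (log_r - 5.0 * math.log(lam2 / lam1))

    # round trip at 1500 K with red (650 nm) and green (550 nm) channels
    lam_r, lam_g = 650e-9, 550e-9
    T_est = ratio_temperature(log_intensity_ratio(lam_r, lam_g, 1500.0), lam_r, lam_g)
    ```

    The appeal of the ratio is that the unknown emissivity and geometry factors largely cancel between the two channels, leaving temperature as the main unknown.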

  17. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems such as face recognition produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between on one hand low-resolution and low-quality images......, we use a learning-based super-resolution algorithm applied to the result of the reconstruction-based part to improve the quality by another factor of two. This results in an improvement factor of four for the entire system. The proposed system has been tested on 122 low-resolution sequences from two...... different databases. The experimental results show that the proposed system can indeed produce a high-resolution and good-quality frontal face image from low-resolution video sequences....

  18. Synchronous-digitization for video rate polarization modulated beam scanning second harmonic generation microscopy

    Science.gov (United States)

    Sullivan, Shane Z.; DeWalt, Emma L.; Schmitt, Paul D.; Muir, Ryan D.; Simpson, Garth J.

    2015-03-01

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, has enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting reduces each set of 10 images to a set of 5 parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer with a simple sine wave at one-tenth the repetition rate of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed for micro-crystals of naproxen.
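
    The linear fitting that contracts each 10-image polarization sweep to 5 parameters can be sketched as ordinary least squares against a truncated Fourier basis in the modulation angle. The DC + 2-phi + 4-phi model below is an assumed functional form consistent with SHG polarization dependence, not taken verbatim from the paper:

    ```python
    import numpy as np

    phi = np.linspace(0, np.pi, 10, endpoint=False)  # 10 polarization states

    def design_matrix(phi):
        """Linear model with 5 parameters (DC term plus 2-phi and 4-phi
        harmonics), matching the 'set of 5 parameters' extracted per
        detector from each 10-image sweep."""
        return np.column_stack([np.ones_like(phi),
                                np.cos(2 * phi), np.sin(2 * phi),
                                np.cos(4 * phi), np.sin(4 * phi)])

    # synthetic detector response and linear least-squares recovery
    true_p = np.array([1.0, 0.3, -0.2, 0.1, 0.05])
    I = design_matrix(phi) @ true_p
    p_fit, *_ = np.linalg.lstsq(design_matrix(phi), I, rcond=None)
    ```

    Because the fit is linear, it reduces to one small matrix solve per pixel, which is what makes the per-frame processing fast enough for video rate.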

  19. Applying GA for Optimizing the User Query in Image and Video Retrieval

    OpenAIRE

    Ehsan Lotfi

    2014-01-01

    In an information retrieval system, the query can be made by a user sketch. The new method presented here optimizes the user sketch and applies the optimized query to retrieve the information. This optimization may be used in Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR), which is based on trajectory extraction. To optimize the retrieval process, one stage of retrieval is performed by the user sketch. The retrieval criterion is based on the proposed distance met...

  20. Feasibility of video codec algorithms for software-only playback

    Science.gov (United States)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames-per-second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease-of-decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
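
    The frame-differencing idea can be shown in a few lines. This is a generic sketch (lossless when the threshold is zero), not any particular codec from the paper:

    ```python
    import numpy as np

    def encode_frames(frames, threshold=0):
        """Frame-differencing encoder sketch: the first frame is a key
        frame; each later frame is stored as its difference from the
        previous frame, with small differences zeroed so unchanged
        regions cost nothing to decode and compress well."""
        frames = [np.asarray(f, dtype=np.int16) for f in frames]
        deltas = [frames[0]]
        for prev, cur in zip(frames, frames[1:]):
            d = cur - prev
            d[np.abs(d) <= threshold] = 0
            deltas.append(d)
        return deltas

    def decode_frames(deltas):
        """Inverse: accumulate the differences back into full frames."""
        out = [deltas[0]]
        for d in deltas[1:]:
            out.append(out[-1] + d)
        return out

    # round trip on synthetic 8x8 frames; threshold=0 is lossless
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, size=(8, 8)) for _ in range(4)]
    restored = decode_frames(encode_frames(frames, threshold=0))
    lossless = all(np.array_equal(a, b) for a, b in zip(frames, restored))
    ```

    A nonzero threshold trades fidelity for sparser (more compressible) deltas, which mirrors the fidelity-versus-bandwidth tradeoff discussed above.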

  1. A Taxonomy of Asynchronous Instructional Video Styles

    Science.gov (United States)

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  2. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the world-wide level of industrial development requires considerably higher standards of quality for manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Thanks to their properties (they are easily obtained and discriminate very well among the materials they penetrate), thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of neutron imaging system. For real-time investigations, tube-type cameras, CCD cameras and, recently, CID cameras are used, capturing the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal-processing technology included in the camera. The image acquisition card, or frame grabber, in a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electric motors that move the object table horizontally and vertically and rotate it. Based on this system, many static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects can be done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  3. Unattended video surveillance systems for international safeguards

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1979-01-01

    The use of unattended video surveillance systems places some unique requirements on the systems and their hardware. The systems have the traditional requirements of video imaging, video storage, and video playback but also have some special requirements such as tamper safing. The technology available to meet these requirements and how it is being applied to unattended video surveillance systems are discussed in this paper

  4. Infrared Video Pupillography Coupled with Smart Phone LED for Measurement of Pupillary Light Reflex.

    Science.gov (United States)

    Chang, Lily Yu-Li; Turuwhenua, Jason; Qu, Tian Yuan; Black, Joanna M; Acosta, Monica L

    2017-01-01

    Clinical assessment of pupil appearance and the pupillary light reflex (PLR) may inform us about the integrity of the autonomic nervous system (ANS). Current clinical pupil assessment is limited to qualitative examination and relies on clinical judgment. Infrared (IR) video pupillography combined with image-processing software offers the possibility of recording quantitative parameters. In this study we describe an IR video pupillography set-up intended for human and animal testing. As part of the validation, resting pupil diameter was measured in human subjects using the NeurOptics™ (Irvine, CA, USA) pupillometer for comparison against that measured by our IR video pupillography set-up, and PLR was assessed in guinea pigs. The set-up consisted of a smart phone with a light-emitting diode (LED) strobe light (0.2 s light ON, 5 s light OFF cycles) as the stimulus and an IR camera to record pupil kinetics. The consensual response was recorded, and the video recording was processed using a custom MATLAB program. The parameters assessed were resting pupil diameter (D1), constriction velocity (CV), percentage constriction ratio, re-dilation velocity (DV) and percentage re-dilation ratio. We report that the IR video pupillography set-up provided results comparable to those of the NeurOptics™ pupillometer in human subjects, and was able to detect a larger resting pupil size in juvenile male guinea pigs compared to juvenile female guinea pigs. At juvenile age, male guinea pigs also had stronger pupil kinetics for both pupil constriction and dilation. Furthermore, our IR video pupillography set-up was able to detect an age-specific increase in pupil diameter (female guinea pigs only) and reduction in CV (male and female guinea pigs) as animals developed from juvenile (3 months) to adult age (7 months). This technique demonstrated accurate and quantitative assessment of pupil parameters, and may provide the foundation for further development of an integrated system useful for clinical
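
    Once a pupil-diameter trace has been extracted from the video, the parameters named here (D1, CV, percentage constriction ratio) follow directly. The synthetic trace and the exact definitions below are illustrative assumptions; the custom MATLAB program's formulas are not given in the abstract:

    ```python
    import numpy as np

    def plr_metrics(t, d):
        """Resting diameter D1, peak constriction velocity CV (reported as
        a positive rate), and percentage constriction ratio from a
        pupil-diameter time series d(t)."""
        t = np.asarray(t, dtype=float)
        d = np.asarray(d, dtype=float)
        d1 = float(d[0])                             # resting diameter before the flash
        cv = float(-np.gradient(d, t).min())         # fastest shrinking rate
        constriction_pct = float(100.0 * (d1 - d.min()) / d1)
        return d1, cv, constriction_pct

    # synthetic consensual response: a 5 mm pupil briefly constricting to ~3 mm
    t = np.linspace(0.0, 2.0, 201)
    d = 5.0 - 2.0 * np.exp(-((t - 0.5) / 0.2) ** 2)
    d1, cv, pct = plr_metrics(t, d)
    ```

    Re-dilation velocity (DV) and the re-dilation ratio would be computed the same way from the recovery portion of the trace.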

  5. Visualization index for image-enabled medical records

    Science.gov (United States)

    Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo

    2011-03-01

    With the wide use of healthcare information technology in hospitals, patients' medical records are becoming more and more complex. To transform text- or image-based medical information into an easily understandable and acceptable form for humans, we designed and developed an innovative indexing method that assigns an anatomical 3D structure object to every patient to visually store indexes of the patient's basic information, historical examination image information and RIS report information. When a doctor wants to review a patient's historical records, he or she can first load the anatomical structure object and then view its 3D index using a digital human model toolkit. This prototype system helps doctors easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time that may be better used to understand information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.

  6. QUANTITATIVE FLOW-ANALYSIS AROUND AQUATIC ANIMALS USING LASER SHEET PARTICLE IMAGE VELOCIMETRY

    NARCIS (Netherlands)

    STAMHUIS, EJ; VIDELER, JJ

    Two alternative particle image velocimetry (PIV) methods have been developed, applying laser light sheet illumination to particle-seeded flows around marine organisms. Successive video images, recorded perpendicular to a light sheet parallel to the main stream, were digitized and processed to map
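
    The core of the digitize-and-process step in PIV is a cross-correlation between interrogation windows of successive frames. This is a minimal FFT-based sketch of that standard operation, not the authors' exact implementation:

    ```python
    import numpy as np

    def piv_displacement(win_a, win_b):
        """Integer-pixel particle displacement between two interrogation
        windows, located at the peak of their FFT-based cross-correlation."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # map wrapped peak indices to signed shifts
        return tuple(int(p) if p <= n // 2 else int(p) - n
                     for p, n in zip(peak, a.shape))

    # synthetic particle window shifted by (3, -2) pixels between frames
    rng = np.random.default_rng(0)
    frame_a = rng.random((32, 32))
    frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))
    dy, dx = piv_displacement(frame_a, frame_b)
    ```

    Repeating this over a grid of windows, and dividing each displacement by the inter-frame time and the image scale, yields the velocity vector map.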

  7. Video as a Medium for Learning and Teaching

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Videos play an important role in today's digital era. According to Cisco®, video (business and consumer combined) was 59% of the total Internet traffic in 2014. Video is permeating our educational institutions, transforming the way we teach, learn, study, communicate and work (Kaltura Report 2015). But are videos always the best choice? In this lecture we examine the benefits of the use of video in learning as well as its limits. Tips on how to minimize those limits will be explained. Example short videos that demonstrate success (or not) stories will be shown. Finally, guidelines for making good videos for education will be given. NB! All Academic Training lectures are recorded but not webcasted. The recording will be linked from this event and the CDS Academic Training collection. Participation is free. No registration needed. Bio: Pedro de Freitas holds an MSc in learning & teaching technologies and an MSc in Psychology from the University of Geneva. His thesis subject ...

  8. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... findings: 1) They are based on a collaborative approach. 2) The sketches act as a means of externalizing hypotheses and assumptions among the participants. Based on our analysis we present an overview of factors involved in collaborative video sketching and show how the factors relate to steps, where...... the participants: shape, record, review and edit their work, leading the participants to new insights about their work....

  9. Camac interface for digitally recording infrared camera images

    International Nuclear Information System (INIS)

    Dyer, G.R.

    1986-01-01

    An instrument has been built to store the digital signals from a modified imaging infrared scanner directly in a digital memory. This procedure avoids the signal-to-noise degradation and dynamic range limitations associated with successive analog-to-digital and digital-to-analog conversions and the analog recording method normally used to store data from the scanner. This technique also allows digital data processing methods to be applied directly to recorded data and permits processing and image reconstruction to be done using either a mainframe or a microcomputer. If a suitable computer and CAMAC-based data collection system are already available, digital storage of up to 12 scanner images can be implemented for less than $1750 in materials cost. Each image is stored as a frame of 60 x 80 eight-bit pixels, with an acquisition rate of one frame every 16.7 ms. The number of frames stored is limited only by the available memory. Initially, data processing for this equipment was done on a VAX 11-780, but images may also be displayed on the screen of a microcomputer. Software for setting the displayed gray scale, generating contour plots and false-color displays, and subtracting one image from another (e.g., background suppression) has been developed for IBM-compatible personal computers
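
    The image-subtraction step mentioned for background suppression is simple once frames are in digital memory. This sketch assumes 8-bit frames sized like the scanner's 60 x 80 pixel images:

    ```python
    import numpy as np

    def suppress_background(frame, background):
        """Subtract a reference (background) frame from a scanner frame,
        clipping back into the 8-bit range, as in digital background
        suppression of recorded IR images."""
        diff = frame.astype(np.int16) - background.astype(np.int16)
        return np.clip(diff, 0, 255).astype(np.uint8)

    # 60 x 80 synthetic frames: uniform background with one hot spot
    background = np.full((60, 80), 100, dtype=np.uint8)
    frame = background.copy()
    frame[30, 40] = 220                      # hot spot
    result = suppress_background(frame, background)
    ```

    Working in a wider signed type before clipping avoids the wrap-around that direct uint8 subtraction would produce, which is the kind of artifact the all-digital path here is meant to eliminate.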

  10. An automated form of video image analysis applied to classification of movement disorders.

    Science.gov (United States)

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images, without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles, and to derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, yielding 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrated the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis.
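
    Deriving joint angles from a fitted skeletal framework reduces to vector geometry at each joint. The function below is a generic sketch (coordinate conventions assumed), not the authors' software:

    ```python
    import math

    def joint_angle(a, b, c):
        """Angle in degrees at joint b between segments b->a and b->c,
        e.g. a knee angle from hip, knee and ankle image coordinates."""
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

    # a fully extended limb reads ~180 degrees, a right-angle bend 90
    straight = joint_angle((0, 2), (0, 1), (0, 0))
    bent = joint_angle((0, 1), (0, 0), (1, 0))
    ```

    One such angle per joint, plus swing distances between landmark positions across frames, gives the compact parameter vector fed to the classifier.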

  11. Data Management Rubric for Video Data in Organismal Biology.

    Science.gov (United States)

    Brainerd, Elizabeth L; Blob, Richard W; Hedrick, Tyson L; Creamer, Andrew T; Müller, Ulrike K

    2017-07-01

    Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, "Establishing Standards for Video Data Management," at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. The first four standards address data preservation and interoperability for sharing, whereas standards 5-9 establish minimum metadata
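
    A concrete metadata record following the nine standards might look like the dictionary below. The field names and values are purely illustrative, not the workshop's official schema:

    ```python
    # each comment maps a field to one of the nine rubric standards
    video_metadata = {
        "storage_uri": "doi:10.5061/example",               # 1: data storage
        "file_format": "MP4 (H.264)",                       # 2: video file format
        "metadata_linkage": "sidecar JSON, same basename",  # 3: metadata linkage
        "access": "open after embargo",                     # 4: data/metadata access
        "contact": "pi@example.edu",                        # 5: contact, acceptable use
        "camera": {"true_frame_rate_hz": 500.0,
                   "spatial_calibration_mm_per_px": 0.12},  # 6: camera settings
        "organism": {"genus": "Anolis", "species": "carolinensis",
                     "mass_g": 4.1, "age": "adult"},        # 7: organism(s)
        "recording_conditions": "treadmill, 25 C",          # 8: recording conditions
        "topic": "hindlimb kinematics during running",      # 9: subject matter/topic
    }
    ```

    Recording the true frame rate and spatial calibration (standard 6) is exactly what separates analyzable footage from the unusable web videos mentioned above.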

  12. The Learning Potential of Video Sketching

    DEFF Research Database (Denmark)

    Gundersen, Peter Bukovica; Ørngreen, Rikke; Hautopp, Heidi

    2017-01-01

    , designers across various disciplines have used sketching as an integrative part of their everyday practice, and sketching has proven to have a multitude of purposes in professional design. The purpose of this paper is to explore what happens when an extra layer of video recording is added during the early...... a new one or another is rejected. Also, video can make participants very and even too self-aware, though in explanatory and persuasive sessions, this may support participants to use more precise and explicit language. Based on these experiments, four different steps of collaborative video sketching have...... been identified: shaping, recording, viewing and editing. Combined with the different modes, these steps constitute the basis of our video sketching framework. This framework has been used as a tool for redesigning learning activities. It suggests new scenarios to include in future research using...

  13. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been conducted at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  14. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
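
    After SURF matching, the homography step can be sketched with a plain Direct Linear Transform (the feature detection and blending stages are omitted, and the point correspondences below are synthetic):

    ```python
    import numpy as np

    def estimate_homography(src, dst):
        """DLT: least-squares 3x3 homography mapping src -> dst from
        four or more point correspondences."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = vt[-1].reshape(3, 3)         # null-space vector of the system
        return H / H[2, 2]

    def warp_point(H, pt):
        """Apply a homography to one (x, y) point."""
        x, y, w = H @ np.array([pt[0], pt[1], 1.0])
        return x / w, y / w

    # recover a known scale-plus-translation homography from 5 correspondences
    H_true = np.array([[1.2, 0.0, 5.0], [0.0, 0.8, -3.0], [0.0, 0.0, 1.0]])
    src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
    dst = [warp_point(H_true, p) for p in src]
    H_est = estimate_homography(src, dst)
    ```

    In a real pipeline the matches are noisy, so the DLT is usually wrapped in RANSAC; the recovered H then tells which pixels of one view overlap the other for blending.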

  15. Real-time video streaming system for LHD experiment using IP multicast

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yamamoto, Takashi; Yoshida, Masanobu; Nagayama, Yoshio; Hasegawa, Makoto

    2009-01-01

    In order to accomplish smooth cooperative research, remote participation plays an important role. For this purpose, the authors have been developing various applications for remote participation in the LHD (Large Helical Device) experiments, such as a Web interface for visualization of acquired data. The video streaming system is one of them. It is useful for grasping the status of the ongoing experiment remotely, and we provide the video images displayed in the control room to remote users. However, usual streaming servers cannot send video images without delay. The delay depends on how the images are sent, but even a small delay might become critical if researchers use the images to adjust the diagnostic devices. One of the main causes of delay is the procedure of compressing and decompressing the images. Furthermore, the commonly used video compression methods are lossy; they remove less important information to reduce the size. However, lossy images cannot be used for physical analysis because the original information is lost. Therefore, video images for remote participation should be sent without compression in order to minimize the delay and to supply high-quality images suitable for physical analysis. However, sending uncompressed video images requires large network bandwidth: for example, sending 5 frames of 16-bit color SXGA images per second requires 100 Mbps. Furthermore, the video images must be sent to several remote sites simultaneously. It is hard for a server PC to handle such a large amount of data. To cope with this problem, the authors adopted IP multicast to send video images to several remote sites at once. Because IP multicast packets are sent only to the networks on which clients want the data, the load on the server does not depend on the number of clients, and the network load is reduced. In this paper, the authors discuss the feasibility of a high-bandwidth video streaming system using IP multicast. (author)
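
    The bandwidth arithmetic and the multicast sending pattern can be sketched with the standard socket API. The group address, port, and chunk size are arbitrary illustrative choices, not the LHD system's values:

    ```python
    import socket
    import struct

    MCAST_GRP = "239.1.2.3"    # administratively scoped multicast group (example)
    MCAST_PORT = 5004
    FRAME_BYTES = 1280 * 1024 * 2   # one uncompressed 16-bit SXGA frame

    def open_multicast_sender(ttl=1):
        """UDP socket configured for IP multicast: one send reaches every
        subscribed site, so server load is independent of client count."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                        struct.pack("b", ttl))
        return sock

    def send_frame(sock, frame_bytes, chunk=1400):
        """Split a frame into UDP datagrams below the Ethernet MTU."""
        for i in range(0, len(frame_bytes), chunk):
            sock.sendto(frame_bytes[i:i + chunk], (MCAST_GRP, MCAST_PORT))

    # the abstract's bandwidth estimate: 5 fps of 16-bit SXGA ~ 100 Mbit/s
    bandwidth_mbps = FRAME_BYTES * 5 * 8 / 1e6
    ```

    Receivers join the group with the `IP_ADD_MEMBERSHIP` socket option; routers then replicate packets only toward networks with subscribers, which is what keeps the server load flat.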

  16. Digital Path Approach Despeckle Filter for Ultrasound Imaging and Video

    Directory of Open Access Journals (Sweden)

    Marek Szczepański

    2017-01-01

    We propose a novel filtering technique capable of reducing the multiplicative noise in ultrasound images, an extension of denoising algorithms based on the concept of digital paths. In this approach, the filter weights are calculated taking into account the similarity between pixel intensities that belong to the local neighborhood of the processed pixel, called a path. The output of the filter is estimated as the weighted average of pixels connected by the paths. The way paths are created is pivotal and determines the effectiveness and computational complexity of the proposed filtering design. Such a procedure can be effective for different types of noise but fails in the presence of multiplicative noise. To increase the filtering efficiency for this type of disturbance, we introduce some improvements of the basic concept and new classes of similarity functions, and finally extend our technique to the spatiotemporal domain. The experimental results prove that the proposed algorithm provides results comparable with state-of-the-art techniques for multiplicative noise removal in ultrasound images, and it can be applied for real-time image enhancement of video streams.

  17. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    Science.gov (United States)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotor UAVs because of their low endurance due to short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms, which fail when the motion vectors are significantly long and the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system and thereby maximize the encoder performance. Experiments are performed on both simulated and real-world video sequences.
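    One standard low-complexity way to estimate a global translation between consecutive frames, of the kind that could refine INS-derived motion, is phase correlation. This is a textbook sketch under that assumption, not the authors' exact refinement step.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the global (dy, dx) translation such that frame b is
    approximately frame a shifted, using the normalized cross-power
    spectrum (phase correlation)."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # argmax indexes a circular shift; fold large offsets to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

    For pure circular shifts the estimate is exact; for real aerial frames the peak sharpness also indicates how trustworthy the metadata-predicted motion is.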

  18. Video Transect Images (1999) from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP) (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  19. Staff acceptance of video monitoring for coordination: a video system to support perioperative situation awareness.

    Science.gov (United States)

    Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard

    2009-08-01

    To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video in an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% reporting no concerns and 42% little concern. The top two reported uses of the video were to see whether cases were finished and whether a room was ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted in overall acceptance of the monitoring video, with residual levels of concern. OR nurses may express staff privacy concerns in the form of concerns over patient privacy. This study provides suggestions for technological and implementation strategies for video monitoring for coordination use in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a perceived loss of autonomy.

  20. The Simple Video Coder: A free tool for efficiently coding social video data.

    Science.gov (United States)

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
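    The outcome measures named above (timing, frequency, duration) reduce to simple aggregation over a coded event list. The tuple format below is hypothetical, not the tool's actual export format.

```python
def summarize_events(events):
    """Aggregate coded (behavior, start_s, end_s) tuples into frequency,
    total duration, and mean duration per behavior."""
    summary = {}
    for behavior, start, end in events:
        stats = summary.setdefault(behavior, {"frequency": 0, "total_s": 0.0})
        stats["frequency"] += 1
        stats["total_s"] += end - start
    for stats in summary.values():
        stats["mean_s"] = stats["total_s"] / stats["frequency"]
    return summary

# Hypothetical coded session: times in seconds from video onset.
coded = [("grooming", 1.0, 4.0), ("rearing", 5.5, 6.0), ("grooming", 8.0, 9.0)]
print(summarize_events(coded))
```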

  1. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
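    The triangulation step mentioned above is commonly done with the textbook linear (DLT) method: given a point's projections in two camera views with known projection matrices, the 3D location is the null vector of a small linear system. This is a generic sketch, not the authors' implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 (normalized image coordinates) in two cameras P1, P2 (3x4)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # dehomogenize

# Two cameras: one at the origin, one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])
x1 = X_true[:2] / X_true[2]                              # seen by camera 1
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]  # seen by camera 2
print(triangulate(P1, P2, x1, x2))                       # ~ [1. 2. 10.]
```

    With noisy tracked points the same system is solved in a least-squares sense, which is where the triangulation accuracy issues reported above arise.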

  2. Dynamic study of DSA by video-densitometry

    International Nuclear Information System (INIS)

    Imamura, Keiko; Tsukamoto, Hiroshi; Ashida, Hiroshi; Ishikawa, Tohru; Fujii, Masamichi; Uji, Teruyuki

    1985-01-01

    A system was developed for the dynamic study of DSA by a video-densitometric technique. As subtraction images are stored on VTR in our DSA examinations, a frame counter was designed to select images on the VTR at an arbitrary interval. ROI setting and video-densitometry were performed using a TV image processor and its host computer. Images were sampled at a rate of 3 frames per second, and clear time-density curves were obtained from brain DSA examinations. Although it takes about 30 minutes to analyse one examination, it is also possible to analyse previous data stored on VTR. For DSA systems having no additional digital storage unit, this method will be helpful. The reduction in image quality through VTR storage posed no problem for video-densitometry. Phantom studies were made concerning the temporal variation of image brightness during the 20-second exposure and also the effect of the subject's thickness on the contrast. Filtering for low-grade averaging is preferable for dynamic studies. (author)
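    A time-density curve of the kind described above is simply the mean intensity inside a region of interest, sampled per frame. The following sketch uses a synthetic frame stack and a gamma-variate-like bolus; the array layout is our own illustration.

```python
import numpy as np

def time_density_curve(frames, roi):
    """Mean intensity inside a rectangular ROI for each sampled frame.
    frames: (n_frames, H, W) array; roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return frames[:, y0:y1, x0:x1].mean(axis=(1, 2))

# Synthetic contrast bolus: ROI intensity rises, peaks, and washes out.
t = np.arange(30) / 3.0                     # 3 frames per second, 10 s
bolus = 100.0 * t * np.exp(-t)              # gamma-variate-like curve
frames = np.zeros((30, 64, 64))
frames[:, 20:40, 20:40] = bolus[:, None, None]
curve = time_density_curve(frames, (20, 40, 20, 40))
```

    Parameters such as time-to-peak then fall out of the curve directly (here the peak is at t = 1 s, frame index 3).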

  3. Images created in a model eye during simulated cataract surgery can be the basis for images perceived by patients during cataract surgery

    Science.gov (United States)

    Inoue, M; Uchida, A; Shinoda, K; Taira, Y; Noda, T; Ohnuma, K; Bissen-Miyajima, H; Hirakata, A

    2014-01-01

    Purpose To evaluate the images created in a model eye during simulated cataract surgery. Patients and methods This study was conducted as a laboratory investigation and interventional case series. An artificial opaque lens, a clear intraocular lens (IOL), or an irrigation/aspiration (I/A) tip was inserted into the 'anterior chamber' of a model eye with the frosted posterior surface corresponding to the retina. Video images were recorded of the posterior surface of the model eye from the rear during simulated cataract surgery. The video clips were shown to 20 patients before cataract surgery, and the similarity of their visual perceptions to these images was evaluated postoperatively. Results The images of the moving lens fragments and I/A tip and the insertion of the IOL were seen from the rear. The image through the opaque lens and the IOL without moving objects was the light of the surgical microscope from the rear. However, when the microscope light was turned off after IOL insertion, the images of the microscope and operating room were observed by the room illumination from the rear. Seventy percent of the patients answered that the visual perceptions of moving lens fragments were similar to the video clips and 55% reported similarity with the IOL insertion. Eighty percent of the patients recommended that patients watch the video clip before their scheduled cataract surgery. Conclusions The patients' visual perceptions during cataract surgery can be reproduced in the model eye. Watching the video images preoperatively may help relax the patients during surgery. PMID:24788007

  4. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We thus extend three fundamental algorithms to multidimensional causal systems: 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
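    The classical 1-D Viterbi algorithm that the paper extends to multiple dimensions can be sketched as follows (the toy two-state HMM is illustrative, not from the paper).

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state sequence of a 1-D HMM, computed in the log domain.
    pi: initial probs (S,), A: transition probs (S,S), B: emission probs (S,O)."""
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])       # delta_0
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)         # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)            # best predecessor of each j
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 0, 1, 0]
print(viterbi(pi, A, B, obs))                      # -> [0, 0, 1, 0]
```

    The multidimensional extension replaces the single predecessor state with a configuration of all causal neighbor states, which is what drives the computational complexity discussed above.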

  5. Superimpose of images by appending two simple video amplifier circuits to color television

    International Nuclear Information System (INIS)

    Kojima, Kazuhiko; Hiraki, Tatsunosuke; Koshida, Kichiro; Maekawa, Ryuichi; Hisada, Kinichi.

    1979-01-01

    Images are very useful for obtaining diagnostic information in medical fields. Also, by superimposing two or three images obtained from the same patient, various information, for example the degree of overlap and anatomical landmarks, which cannot be found in a single image, can often be found. In this paper, the characteristics of our trial color television system for superimposing x-ray images and/or radionuclide images are described. This color television system, which superimposes two images in different colors, consists of two monochromatic vidicon cameras and a conventional 20-inch color television to which only two simple video amplifier circuits are added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. This system is a very simple and economical color display, and it enhances the visibility of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method. As a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to find the exact position of tumors. Furthermore, this system was very useful for color display of multinuclide scintigraphy. (author)

  6. Superimpose of images by appending two simple video amplifier circuits to color television

    Energy Technology Data Exchange (ETDEWEB)

    Kojima, K; Hiraki, T; Koshida, K; Maekawa, R [Kanazawa Univ. (Japan). School of Paramedicine; Hisada, K

    1979-09-01

    Images are very useful for obtaining diagnostic information in medical fields. Also, by superimposing two or three images obtained from the same patient, various information, for example the degree of overlap and anatomical landmarks, which cannot be found in a single image, can often be found. In this paper, the characteristics of our trial color television system for superimposing x-ray images and/or radionuclide images are described. This color television system, which superimposes two images in different colors, consists of two monochromatic vidicon cameras and a conventional 20-inch color television to which only two simple video amplifier circuits are added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. This system is a very simple and economical color display, and it enhances the visibility of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method. As a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to find the exact position of tumors. Furthermore, this system was very useful for color display of multinuclide scintigraphy.

  7. Video change detection for fixed wing UAVs

    Science.gov (United States)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for image-based change detection designed for videos acquired by fixed-wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful for recognizing functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed-wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be ensured simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change between "before" and "after" videos acquired by fixed-wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system comprising a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed-wing UAV and to synthetic data.

  8. HRV based Health&Sport markers using video from the face

    OpenAIRE

    Capdevila, Ll.; Moreno, Jordi; Movellan, Javier; Parrado Romero, Eva; Ramos Castro, Juan José

    2012-01-01

    Heart Rate Variability (HRV) is an indicator of health status in the general population and of adaptation to stress in athletes. In this paper we compare the performance of two systems to measure HRV: (1) a commercial system based on recording the physiological cardiac signal, and (2) a computer vision system that uses standard video images of the face to estimate RR intervals from changes in skin color of the face. We show that the computer vision system pe...

  9. Image enhancement software for underwater recovery operations: User's manual

    Science.gov (United States)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides the capability to perform contrast enhancement and other similar functions in real time through hardware lookup tables, to automatically perform histogram equalization, and to capture one or more frames and average them or apply one of several different processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
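    Histogram equalization through a lookup table, as described above, can be sketched in a few lines: build a 256-entry table from the cumulative histogram, then remap every pixel with one table lookup, which is exactly what the hardware LUTs do per video frame.

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit image via a 256-entry lookup table."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first occupied bin
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]                                # one table lookup per pixel

# A low-contrast gradient occupying only [100, 150] spreads to [0, 255].
img = np.tile(np.linspace(100, 150, 64).astype(np.uint8), (64, 1))
out = equalize(img)
```

    Because only the table changes per frame, the same mechanism runs at video rate on the digitizer hardware the report describes.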

  10. Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.; Virden, Daniel J.; Myers, Joshua R.; Maxwell, Adam R.

    2012-09-01

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review the video and determine what was recorded. We proposed to conduct algorithm and software development to identify and differentiate thermally detected targets of interest, which would allow automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information describing those objects. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data-smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. In addition, we recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test the software developed during FY2013. The annotations will also form the basis for creating a method to reliably identify recorded objects.
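    The first stage of such a pipeline, detecting warm targets against a cooler background, can be sketched with background subtraction, thresholding, and blob counting. This is a generic illustration in Python, not the PNNL MATLAB code.

```python
import numpy as np

def count_targets(frame, background, thresh=10.0):
    """Threshold the background-subtracted thermal frame and count
    4-connected foreground blobs with a simple flood fill."""
    mask = (frame.astype(float) - background) > thresh
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        count += 1                         # new blob found
        stack = [(y, x)]
        seen[y, x] = True
        while stack:                       # flood-fill the whole blob
            cy, cx = stack.pop()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
    return count
```

    Per-blob attributes (size, shape, intensity) extracted during the flood fill are the kind of descriptors the classification stage would then smooth and model.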

  11. Holistic feedback approach with video and peer discussion under teacher supervision.

    Science.gov (United States)

    Hunukumbure, Agra Dilshani; Smith, Susan F; Das, Saroj

    2017-09-29

    High-quality feedback is vital to learning in medical education, but many students and teachers have expressed dissatisfaction with current feedback practices. Lack of teachers' insight into students' feedback requirements may be a key factor, which might be addressed by giving control to the students through student-led feedback practices. The conceptual framework was built on the three dimensions of learning theory by Illeris and on Vygotsky's zone of proximal development and scaffolding. We introduced a feedback session with self-reflection and peer feedback in the form of open discussion of video-recorded student performances under a teacher's guidance. The aims of this qualitative study were to explore students' perceptions of this holistic feedback approach and to investigate ways of maximising effective feedback and learning. Semi-structured interviews were used to gather data, which were evaluated using a thematic analytical approach. The participants were third-year medical students of Imperial College London on clinical placements at Hillingdon Hospital. Video-based self-reflection helped some students to identify mistakes in communication and technical skills of which they were unaware prior to the session. Those who were new to video feedback found that their expected self-image differed from their actual image on video, leading to some distress. However, many also recognised through peer videos that mistakes were not unique to themselves, and they learnt both from model performances and from each other's mistakes. Balancing honest feedback with empathy was a challenge for many during peer discussion. The teacher played a vital role in making the session a success by providing guidance and a supportive environment. This study has demonstrated many potential benefits of this holistic feedback approach, with video-based self-reflection and peer discussion engaging students at a deeper cognitive level than standard descriptive feedback.

  12. A Novel Quantum Video Steganography Protocol with Large Payload Based on MCQI Quantum Video

    Science.gov (United States)

    Qu, Zhiguo; Chen, Siyi; Ji, Sai

    2017-11-01

    As one of the important multimedia forms in quantum networks, quantum video attracts more and more attention from experts and scholars around the world. A secure quantum video steganography protocol with large payload, based on the video strip encoding method called MCQI (Multi-Channel Quantum Images), is proposed in this paper. The new protocol randomly embeds the secret information, in the form of quantum video, into the quantum carrier video on the basis of unique features of video frames. It embeds quantum video as the secret information for covert communication. As a result, its capacity is greatly expanded compared with previous quantum steganography achievements. Meanwhile, the new protocol also achieves good security and imperceptibility by virtue of the randomization of embedding positions and efficient use of redundant frames. Furthermore, the receiver is able to extract the secret information from the stego video without retaining the original carrier video, and to restore the original quantum video afterwards. The simulation and experiment results prove that the algorithm not only has good imperceptibility and high security, but also has a large payload.

  13. Pollen Bearing Honey Bee Detection in Hive Entrance Video Recorded by Remote Embedded System for Pollination Monitoring

    Science.gov (United States)

    Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.

    2016-06-01

    Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for pollen-bearing honey bee detection in surveillance video obtained at the entrance of a hive. The proposed system can be used as a part of a more complex system for tracking and counting of honey bees, with remote pollination monitoring as the final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen-bearing honey bees and honey bees without a pollen load, is performed using a nearest mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. This favors the proposed method, particularly bearing in mind that real-time video transmission to a remote high-performance computing workstation is still an issue, and transfer of the obtained parameters of the pollination process is much easier.
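    A nearest mean classifier of the kind used above is only a few lines: store the per-class mean of the training descriptors and assign each new descriptor to the closest mean. The two-feature descriptor layout and the synthetic clusters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

class NearestMeanClassifier:
    """Assign each descriptor to the class whose training mean is closest
    in Euclidean distance."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Hypothetical (color variance, eccentricity) descriptors for the two classes.
rng = np.random.default_rng(0)
pollen = rng.normal([0.8, 0.3], 0.05, (50, 2))      # class 1: pollen load
no_pollen = rng.normal([0.3, 0.7], 0.05, (50, 2))   # class 0: no pollen load
X = np.vstack([pollen, no_pollen])
y = np.array([1] * 50 + [0] * 50)
clf = NearestMeanClassifier().fit(X, y)
```

    The appeal for an embedded system is that both training and prediction are a handful of arithmetic operations per sample, with no iterative optimization.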

  14. Using video-based observation research methods in primary care health encounters to evaluate complex interactions.

    Science.gov (United States)

    Asan, Onur; Montague, Enid

    2014-01-01

    The purpose of this paper is to describe the use of video-based observation research methods in the primary care environment, highlight important methodological considerations, and provide practical guidance for primary care and human factors researchers conducting video studies to understand patient-clinician interaction in primary care settings. We reviewed studies in the literature which used video methods in health care research, and we also drew on our own experience from the video studies we conducted in primary care settings. This paper highlights the benefits of using video techniques, such as multi-channel recording and video coding, and compares "unmanned" video recording with the traditional observation method in primary care research. We propose a list that can be followed step by step to conduct an effective video study in a primary care setting for a given problem. This paper also describes obstacles researchers should anticipate when using video recording methods in future studies. With new technological improvements, video-based observation research is becoming a promising method in primary care and HFE research. Video recording has been under-utilised as a data collection tool because of confidentiality and privacy issues. However, it has many benefits compared with traditional observation, and recent studies using video recording methods have introduced new research areas and approaches.

  15. Relating pressure measurements to phenomena observed in high speed video recordings during tests of explosive charges in a semi-confined blast chamber

    CSIR Research Space (South Africa)

    Mostert, FJ

    2012-09-01

    Full Text Available initiation of the charge. It was observed in the video recordings that the detonation product cloud exhibited pulsating behaviour due to the reflected shocks in the chamber analogous to the behaviour of the gas bubble in underwater explosions. This behaviour...

  16. The use of digital imaging, video conferencing, and telepathology in histopathology: a national survey.

    Science.gov (United States)

    Dennis, T; Start, R D; Cross, S S

    2005-03-01

    To undertake a large scale survey of histopathologists in the UK to determine the current infrastructure, training, and attitudes to digital pathology. A postal questionnaire was sent to 500 consultant histopathologists randomly selected from the membership of the Royal College of Pathologists in the UK. There was a response rate of 47%. Sixty four per cent of respondents had a digital camera mounted on their microscope, but only 12% had any sort of telepathology equipment. Thirty per cent used digital images in electronic presentations at meetings at least once a year and only 24% had ever used telepathology in a diagnostic situation. Fifty nine per cent had received no training in digital imaging. Fifty eight per cent felt that the medicolegal implications of duty of care were a barrier to its use. A large proportion of pathologists (69%) were interested in using video conferencing for remote attendance at multidisciplinary team meetings. There is a reasonable level of equipment and communications infrastructure among histopathologists in the UK but a very low level of training. There is resistance to the use of telepathology in the diagnostic context but enthusiasm for the use of video conferencing in multidisciplinary team meetings.

  17. A Miniaturized Video System for Monitoring Drosophila Behavior

    Science.gov (United States)

    Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana

    2011-01-01

    populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
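    The analog band-pass stage described above has a simple software analogue: remove the slow illumination baseline from the per-frame mean-luminance signal with a moving average, then count threshold crossings of the residual as entry/exit events. The synthetic signal below is a hypothetical stand-in for the camera output.

```python
import numpy as np

def count_activity_events(luminance, window=21, thresh=5.0):
    """Software analogue of the band-pass circuit: subtract a moving-average
    baseline from the per-frame mean luminance, then count threshold
    crossings of the residual (flies entering or leaving the image)."""
    kernel = np.ones(window)
    counts = np.convolve(np.ones_like(luminance), kernel, mode="same")
    baseline = np.convolve(luminance, kernel, mode="same") / counts
    residual = luminance - baseline
    active = np.abs(residual) > thresh
    # count rising edges of the active mask
    return int(active[0]) + int(np.sum(~active[:-1] & active[1:]))

# Synthetic signal: steady illumination with two brief dips as flies cross.
signal = np.full(200, 100.0)
signal[50:55] -= 20.0
signal[120:125] -= 20.0
```

    Inter-event durations then come from the indices of those rising edges, matching the activity parameters described above.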

  18. Video elicitation interviews: a qualitative research method for investigating physician-patient interactions.

    Science.gov (United States)

    Henry, Stephen G; Fetters, Michael D

    2012-01-01

    We describe the concept and method of video elicitation interviews and provide practical guidance for primary care researchers who want to use this qualitative method to investigate physician-patient interactions. During video elicitation interviews, researchers interview patients or physicians about a recent clinical interaction using a video recording of that interaction as an elicitation tool. Video elicitation is useful because it allows researchers to integrate data about the content of physician-patient interactions gained from video recordings with data about participants' associated thoughts, beliefs, and emotions gained from elicitation interviews. This method also facilitates investigation of specific events or moments during interactions. Video elicitation interviews are logistically demanding and time consuming, and they should be reserved for research questions that cannot be fully addressed using either standard interviews or video recordings in isolation. As many components of primary care fall into this category, high-quality video elicitation interviews can be an important method for understanding and improving physician-patient interactions in primary care.

  19. Video Elicitation Interviews: A Qualitative Research Method for Investigating Physician-Patient Interactions

    Science.gov (United States)

    Henry, Stephen G.; Fetters, Michael D.

    2012-01-01

    We describe the concept and method of video elicitation interviews and provide practical guidance for primary care researchers who want to use this qualitative method to investigate physician-patient interactions. During video elicitation interviews, researchers interview patients or physicians about a recent clinical interaction using a video recording of that interaction as an elicitation tool. Video elicitation is useful because it allows researchers to integrate data about the content of physician-patient interactions gained from video recordings with data about participants’ associated thoughts, beliefs, and emotions gained from elicitation interviews. This method also facilitates investigation of specific events or moments during interactions. Video elicitation interviews are logistically demanding and time consuming, and they should be reserved for research questions that cannot be fully addressed using either standard interviews or video recordings in isolation. As many components of primary care fall into this category, high-quality video elicitation interviews can be an important method for understanding and improving physician-patient interactions in primary care. PMID:22412003

  20. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2004-06-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation as well as on the cognitive task (semantic segmentation at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties and the definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to the sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which allows to

  1. Adaptive modeling of sky for video processing and coding applications

    NARCIS (Netherlands)

    Zafarifar, B.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    Video content analysis for still and moving images can be used for various applications, such as high-level semantic-driven operations or pixel-level content-dependent image manipulation. Within video content analysis, sky regions of an image form visually important objects, for which interesting

  2. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled, with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field, resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats.

  3. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    Science.gov (United States)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy, and new opportunities for longitudinal studies of cancer recurrence.

  4. First year midwifery students' experience with self-recorded and assessed video of selected midwifery practice skills at Otago Polytechnic in New Zealand.

    Science.gov (United States)

    McIntosh, Carolyn; Patterson, Jean; Miller, Suzanne

    2018-01-01

    Studying undergraduate midwifery at a distance has advantages in terms of accessibility and community support but presents challenges for practice-based competence assessment. Student-recorded videos provide opportunities for completing the assigned skills, self-reflection, and assessment by a lecturer. This research asked how midwifery students experienced the process of completing the Video Assessment of Midwifery Practice Skills (VAMPS) in 2014 and 2015. The aim of the survey was to identify the benefits and challenges of the VAMPS assessment and to identify opportunities for improvement from the students' perspective. All students who had participated in the VAMPS assessment during 2014 and 2015 were invited to complete an online survey. To maintain confidentiality for the students, the Qualtrics survey was administered and the data downloaded by the Organisational Research Officer. Ethical approval was granted by the organisational ethics committee. Descriptive statistics were generated and students' comments were collated. The VAMPS provided an accessible option for the competence assessment and the opportunity for self-reflection and re-recording to perfect their skill, which the students appreciated. The main challenges related to the technical aspects of recording and uploading the assessment. This study highlighted some of the benefits and challenges experienced by the midwifery students and showed that practice skills can be successfully assessed at a distance. The additional benefit of accessibility afforded by video assessment is a new and unique finding for undergraduate midwifery education and may resonate with other educators seeking ways to assess similar skill sets with cohorts of students studying at a distance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Low complexity video encoding for UAV inspection

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Zhang, Ruo; Forchhammer, Søren

    2016-01-01

    In this work we present several methods for fast integer motion estimation of videos recorded aboard an Unmanned Aerial Vehicle (UAV). Different from related work, the field depth is not considered to be consistent. The novel methods are designed for low-complexity motion vector (MV) prediction in H.264/AVC. Results for UAV infrared (IR) video are also provided.
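    For orientation on the motion-estimation setting, here is a minimal full-search block-matching sketch: the generic SAD baseline that low-complexity MV predictors aim to beat, not the paper's method. All names and parameters are illustrative.

    ```python
    import numpy as np

    def sad_search(ref, cur, by, bx, bsize=8, srange=4):
        """Full-search integer motion estimation for one block: return the
        (dy, dx) displacement into `ref` minimizing the sum of absolute
        differences (SAD) against the block at (by, bx) in `cur`."""
        block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
        best, best_mv = None, (0, 0)
        for dy in range(-srange, srange + 1):
            for dx in range(-srange, srange + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                    continue  # candidate block falls outside the reference frame
                cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
                sad = int(np.abs(block - cand).sum())
                if best is None or sad < best:
                    best, best_mv = sad, (dy, dx)
        return best_mv, best
    ```

    Fast methods such as those in the abstract reduce the cost of this exhaustive search by predicting the MV and checking only a few candidates.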

  6. Make your own video with ActivePresenter

    CERN Document Server

    CERN. Geneva

    2016-01-01

    A step-by-step video tutorial on how to use ActivePresenter, a screen recording tool for Windows and Mac. The installation step is not needed for CERN users, as the product is already made available. This tutorial explains how to install ActivePresenter, how to do a screen recording and edit a video using ActivePresenter, and finally how to export the end product. Tell us what you think about this or any other video in this category via e-learning.support at cern.ch. All info about the CERN rapid e-learning project is linked from http://twiki.cern.ch/ELearning

  7. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  8. Reliable assessment of general surgeons' non-technical skills based on video-recordings of patient simulated scenarios.

    Science.gov (United States)

    Spanager, Lene; Beier-Holgersen, Randi; Dieckmann, Peter; Konge, Lars; Rosenberg, Jacob; Oestergaard, Doris

    2013-11-01

    Nontechnical skills are essential for safe and efficient surgery. The aim of this study was to evaluate the reliability of an assessment tool for surgeons' nontechnical skills, Non-Technical Skills for Surgeons dk (NOTSSdk), and the effect of rater training. A 1-day course was conducted for 15 general surgeons in which they rated surgeons' nontechnical skills in 9 video recordings of scenarios simulating real intraoperative situations. Data were gathered from 2 sessions separated by a 4-hour training session. Interrater reliability was high for both pretraining ratings (Cronbach's α = .97) and posttraining ratings (Cronbach's α = .98). There was no statistically significant development in assessment skills. The D study showed that 2 untrained raters or 1 trained rater was needed to obtain generalizability coefficients >.80. The high pretraining interrater reliability indicates that videos were easy to rate and Non-Technical Skills for Surgeons dk easy to use. This implies that Non-Technical Skills for Surgeons dk (NOTSSdk) could be an important tool in surgical training, potentially improving safety and quality for surgical patients. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Video transmission on ATM networks. Ph.D. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
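    The dual leaky bucket policing function mentioned above can be sketched as two conventional leaky buckets checked jointly: a shallow bucket enforcing the peak cell rate and a deep bucket enforcing the sustained rate. This is a hedged illustration of the general mechanism, not the thesis's exact algorithm; class names and the check-then-commit design are ours.

    ```python
    class LeakyBucket:
        """Continuous-drain leaky bucket; one cell adds 1.0 unit of fluid.
        Arrival times passed to level_at/commit must be nondecreasing."""
        def __init__(self, rate, depth):
            self.rate, self.depth = rate, depth
            self.level, self.last = 0.0, 0.0

        def level_at(self, t):
            return max(0.0, self.level - (t - self.last) * self.rate)

        def commit(self, t):
            self.level = self.level_at(t) + 1.0
            self.last = t

    class DualLeakyBucket:
        """Police peak rate (shallow bucket) and sustained rate (deep bucket):
        a cell conforms only if it fits in both buckets, and a nonconforming
        cell leaves neither bucket's state changed."""
        def __init__(self, peak, sustained):
            self.buckets = [peak, sustained]

        def conforms(self, t):
            if all(b.level_at(t) + 1.0 <= b.depth for b in self.buckets):
                for b in self.buckets:
                    b.commit(t)
                return True
            return False
    ```

    The shallow bucket limits back-to-back bursts while the deep bucket bounds the long-run average, which is what makes the pair useful as a network policing function.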

  10. Video outside versus video inside the web: do media setting and image size have an impact on the emotion-evoking potential of video?

    NARCIS (Netherlands)

    Verleur, R.; Verhagen, Pleunes Willem; Crawford, Margaret; Simonson, Michael; Lamboy, Carmen

    2001-01-01

    To explore the educational potential of video-evoked affective responses in a Web-based environment, the question was raised whether video in a Web-based environment is experienced differently from video in a traditional context. An experiment was conducted that studied the affect-evoking power of

  11. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

    This book begins with a summary of digital image processing and personal computers, then covers the classification of personal computer image processing systems, digital image processing, the development of personal computers and image processing, and image processing systems. It presents basic methods of image processing such as color image processing and video processing, along with software and interfaces, computer graphics, and video image and video processing application cases, including satellite image processing, high-speed color transformation, and portrait work systems.

  12. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    Science.gov (United States)

    Choi, Inchang; Baek, Seung-Hwan; Kim, Min H.

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.
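    For context, here is the simplest deinterlacing baseline that data-driven approaches like the joint dictionary learning described above improve upon: plain linear interpolation of the missing rows of one field. This sketch is purely illustrative and is not the paper's method.

    ```python
    import numpy as np

    def weave_even_field(frame):
        """Keep only the even rows of a frame, as one interlaced field."""
        return frame[0::2, :]

    def deinterlace_linear(field, total_rows):
        """Fill the missing odd rows by averaging the even neighbours above
        and below (simple linear baseline, no learned dictionary)."""
        out = np.zeros((total_rows, field.shape[1]), dtype=np.float64)
        out[0::2] = field
        interior = np.arange(1, total_rows - 1, 2)
        out[interior] = 0.5 * (out[interior - 1] + out[interior + 1])
        if total_rows % 2 == 0:          # last (odd) row has no neighbour below
            out[-1] = out[-2]
        return out
    ```

    Linear interpolation is exact on smooth vertical gradients but produces the jaggy artifacts on edges that motivate learned deinterlacing.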

  13. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen-image creation depending on the musical form and the lyrics of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  14. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  15. Short-term change detection for UAV video

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

    In recent years, there has been an increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images separated by several days, weeks, or even years. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a pre-requisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine-registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer
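    The local neighborhood search added to image differencing can be illustrated with a small numpy sketch: for each pixel, the absolute difference is minimized over a small window of shifts, so that residual misregistration of a pixel or two no longer triggers a false change. This is a generic shift-tolerant difference, not the ABUL implementation; the function name and radius are ours.

    ```python
    import numpy as np

    def shift_tolerant_diff(a, b, r=1):
        """Per-pixel absolute difference between images a and b, minimized
        over a (2r+1)x(2r+1) neighborhood of local shifts of b, to tolerate
        small residual misregistration after fine-registration."""
        h, w = a.shape
        best = np.full((h, w), np.inf)
        pa = a.astype(np.float64)
        pb = np.pad(b.astype(np.float64), r, mode='edge')
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                cand = np.abs(pa - pb[dy:dy + h, dx:dx + w])
                best = np.minimum(best, cand)
        return best
    ```

    A genuinely new or moved object still produces a large residual, because no small shift of the reference image can explain it away.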

  16. Multiframe digitization of x-ray (TV) images (abstract)

    Science.gov (United States)

    Karpenko, V. A.; Khil'chenko, A. D.; Lysenko, A. P.; Panchenko, V. E.

    1989-07-01

    The work in progress deals with the experimental search for a technique of digitizing x-ray TV images. The small volume of the buffer memory of the analog-to-digital (A/D) converter (ADC) we have previously used to detect TV signals made it necessary to digitize only one line at a time of the television raster and also to make use of gating to gain the video information contained in the whole frame. This paper is devoted to multiframe digitizing. The recorder of video signals comprises a broadband 8-bit A/D converter, a buffer memory having 128K words and a control circuit which forms a necessary sequence of advance pulses for the A/D converter and the memory relative to the input frame and line sync pulses (FSP and LSP). The device provides recording of video signals corresponding to one or a few frames following one after another, or to their fragments. The control circuit is responsible for the separation of the required fragment of the TV image. When loading the limit registers, the following input parameters of the control circuit are set: the skipping of a definite number of lines after the next FSP, the number of the lines of recording inside a fragment, the frequency of the information lines inside a fragment, the delay in the start of the ADC conversion relative to the arrival of the LSP, the length of the information section of a line, and the frequency of taking the readouts in a line. In addition, among the instructions given are the number of frames of recording and the frequency of their sequence. Thus, the A/D converter operates only inside a given fragment of the TV image. The information is introduced into the memory in sequence, fragment by fragment, without skipping and is then extracted as samples according to the addresses needed for representation in the required form, and processing. The video signal recorder governs the shortest time of the ADC conversion per point of 250 ns. As before, among the apparatus used were an image vidicon with

  17. Acute Pectoralis Major Rupture Captured on Video

    Directory of Open Access Journals (Sweden)

    Alejandro Ordas Bayon

    2016-01-01

    Full Text Available Pectoralis major (PM) ruptures are uncommon injuries, although they are becoming more frequent. We report a case of a PM rupture in a young male who presented with axillary pain and absence of the anterior axillary fold after he perceived a snap while lifting 200 kg in the bench press. Diagnosis of PM rupture was suspected clinically and confirmed with imaging studies. The patient was treated surgically, reinserting the tendon to the humerus with suture anchors. One-year follow-up showed excellent results. The patient was recording his training on video, so we can observe in detail the most common mechanism of injury of PM rupture.

  18. Motion based parsing for video from observational psychology

    Science.gov (United States)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  19. Dancing to distraction: mediating 'docile bodies' in 'Philippine Thriller video'.

    Science.gov (United States)

    Mangaoang, Áine

    2013-01-01

    This essay examines the conditions behind the 'Philippine Prison Thriller' video, a YouTube spectacle featuring the 1,500 inmates of Cebu Provincial Detention and Rehabilitation Centre (CPDRC) dancing to Michael Jackson's hit song 'Thriller'. The video achieved viral status after it was uploaded onto the video-sharing platform in 2007, and sparked online debates as to whether this video, containing recorded moving images of allegedly forced dancing, was a form of cruel and inhumane punishment or a novel approach to rehabilitation. The immense popularity of the video inspired creative responses from viewers, and this international popularity caused the CPDRC to host a monthly live dance show held in the prison yard, now in its seventh year. The essay explores how seemingly innocuous products of user-generated content are imbued with ideologies that obscure or reduce relations of race, agency, power and control. By contextualising the video's origins, I highlight current Philippine prison conditions and introduce how video-maker/programme inventor/prison warden Byron Garcia sought to distance his facility from the Philippine prison majority. I then investigate the 'mediation' of 'Thriller' through three main issues. One, I examine the commodification and transformation from viral video to a thana-tourist destination; two, the global appeal of 'Thriller' is founded on public penal intrigue and essentialist Filipino tropes, mixed with a certain novelty factor widely suffused in YouTube formats; three, how dance performance and its mediation here are conducive to creating Foucault's docile bodies, which operate as a tool of distraction for the masses and ultimately serve the interests of the state far more than they rehabilitate (unconvicted and therefore innocent) inmates.

  20. Musashi dynamic image processing system

    International Nuclear Information System (INIS)

    Murata, Yutaka; Mochiki, Koh-ichi; Taguchi, Akira

    1992-01-01

    In order to produce transmitted neutron dynamic images using neutron radiography, a real-time system called the Musashi dynamic image processing system (MDIPS) was developed to collect, process, display, and record image data. The block diagram of the MDIPS is shown. The system consists of a highly sensitive, high-resolution TV camera driven by a custom-made scanner; a TV camera deflection controller for optimal scanning, which adjusts to the luminous intensity and the moving speed of an object; a real-time corrector to perform real-time correction of dark current, shading distortion, and field intensity fluctuation; a real-time filter for increasing the image signal-to-noise ratio; and a video recording unit and a pseudocolor monitor to realize recording on commercially available products and monitoring by means of CRTs in standard TV scanning, respectively. The TV camera and the TV camera deflection controller utilized for producing still images can be applied to this case. The block diagram of the real-time corrector is shown. Its performance is explained. Linear filters and ranked-order filters were developed. (K.I.)
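    The dark-current and shading correction performed by the real-time corrector corresponds to standard flat-field arithmetic, sketched here in batch form. This is illustrative only: the MDIPS does this per pixel in dedicated hardware, and the function name and rescaling convention are ours.

    ```python
    import numpy as np

    def flat_field_correct(raw, dark, flat, eps=1e-6):
        """Dark-current subtraction plus shading (flat-field) correction:
        divide out the per-pixel gain (flat - dark) and rescale so the
        overall intensity level is preserved."""
        gain = flat.astype(np.float64) - dark
        corrected = (raw.astype(np.float64) - dark) / np.maximum(gain, eps)
        return corrected * gain.mean()
    ```

    With a dark frame and a flat (uniform-illumination) frame recorded once, every subsequent frame can be corrected with one subtraction, one division, and one scale per pixel, which is what makes the operation feasible in real time.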

  1. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    Science.gov (United States)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts: a suite of applications programs and an executive which serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image-related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  2. Mapping (and modeling) physiological movements during EEG-fMRI recordings: the added value of the video acquired simultaneously.

    Science.gov (United States)

    Ruggieri, Andrea; Vaudano, Anna Elisabetta; Benuzzi, Francesca; Serafini, Marco; Gessaroli, Giuliana; Farinelli, Valentina; Nichelli, Paolo Frigio; Meletti, Stefano

    2015-01-15

    During resting-state EEG-fMRI studies in epilepsy, patients' spontaneous head-face movements occur frequently. We tested the usefulness of synchronous video recording to identify and model the fMRI changes associated with non-epileptic movements, to improve the sensitivity and specificity of fMRI maps related to interictal epileptiform discharges (IED). Categorization of different facial/cranial movements during EEG-fMRI was obtained for 38 patients [with benign epilepsy with centro-temporal spikes (BECTS, n=16); with idiopathic generalized epilepsy (IGE, n=17); focal symptomatic/cryptogenic epilepsy (n=5)]. We compared, at the single-subject and group levels, the IED-related fMRI maps obtained with and without additional regressors related to spontaneous movements. As a secondary aim, we treated facial movements as events of interest, testing the usefulness of the video information for obtaining fMRI maps of the following face movements: swallowing, mouth-tongue movements, and blinking. Video information substantially improved the identification and classification of the artifacts with respect to EEG observation alone (a mean gain of 28 events per exam). Inclusion of physiological activities as additional regressors in the GLM demonstrated an increased Z-score and number of voxels of the global maxima and/or new BOLD clusters in around three quarters of the patients. Video-related fMRI maps for swallowing, mouth-tongue movements, and blinking were comparable to those obtained in previous task-based fMRI studies. Video acquisition during EEG-fMRI is a useful source of information. Modeling physiological movements in EEG-fMRI studies of epilepsy will lead to more informative IED-related fMRI maps in different epileptic conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
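The modeling step described above, adding a video-derived movement regressor to the GLM alongside the IED regressor, can be sketched as follows. The tiny design matrix, regressor values, and effect sizes are invented for illustration; a real analysis would use convolved, TR-sampled regressors and a full fMRI toolbox.

```python
# Sketch: a GLM whose design matrix includes both an IED regressor and a
# video-derived movement (e.g. swallowing) regressor. All values illustrative.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A: list of lists)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """Ordinary least squares: beta = (X'X)^-1 X'y."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Toy time series: IED regressor and a swallowing (movement) regressor.
ied  = [0, 1, 0, 1, 0, 1, 0, 1]
move = [1, 0, 0, 1, 1, 0, 0, 1]
y    = [2.0 + 1.5 * i + 0.8 * m for i, m in zip(ied, move)]

# Modeling the movement explicitly keeps its variance out of the IED estimate.
X_full = [[1.0, i, m] for i, m in zip(ied, move)]
beta = ols(X_full, y)
print([round(b, 3) for b in beta])  # -> [2.0, 1.5, 0.8]
```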

  3. The MIVS [Modular Integrated Video System] Image Processing System (MIPS) for assisting in the optical surveillance data review process

    International Nuclear Information System (INIS)

    Horton, R.D.

    1990-01-01

    The MIVS (Modular Integrated Video System) Image Processing System (MIPS) is designed to review MIVS surveillance data automatically and identify IAEA-defined objects of safeguards interest. To achieve this, MIPS uses both digital image processing and neural network techniques to detect objects of safeguards interest in an image and assist an inspector in the review of the MIVS video tapes. MIPS must be "trained", i.e., given example images showing the objects that it must recognize, for each different facility. Image processing techniques are used first to identify significantly changed areas of the image. A neural network is then used to determine if the image contains the important object(s). The MIPS algorithms have demonstrated the capability to detect when a spent fuel shipping cask is present in an image after MIPS is properly trained to detect the cask. The algorithms have also demonstrated the ability to reject uninteresting background activities such as people and crane movement. When MIPS detects an important object, the corresponding image is stored to other media and later replayed for the inspector to review. The MIPS algorithms are being implemented in commercially available hardware: an image processing subsystem and an 80386 personal computer. MIPS will have a high-level, easy-to-use system interface to allow inspectors to train MIPS on MIVS data from different facilities and on various safeguards-significant objects. This paper describes the MIPS algorithms, hardware implementation, and system configuration. 3 refs., 10 figs
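The first stage of the two-stage review described above, flagging significantly changed areas before any classifier runs, can be sketched with simple frame differencing. The frames and threshold below are illustrative, not MIPS parameters.

```python
# Sketch: frame differencing flags changed regions; only then would a trained
# classifier decide whether the change is a safeguards-relevant object.

def changed_fraction(ref, cur, threshold=30):
    """Fraction of pixels whose absolute difference exceeds threshold."""
    total = changed = 0
    for ref_row, cur_row in zip(ref, cur):
        for a, b in zip(ref_row, cur_row):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total

reference = [[10] * 8 for _ in range(8)]       # empty scene
current = [row[:] for row in reference]
for r in range(2, 6):                          # bright object enters
    for c in range(2, 6):
        current[r][c] = 200

frac = changed_fraction(reference, current)
print(round(frac, 3))  # 16 of 64 pixels changed -> 0.25
# Only if frac exceeds a trigger level would the (separately trained)
# classifier be run and the image stored for inspector review.
```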

  4. Functional changes in the reward circuit in response to gaming-related cues after training with a commercial video game.

    Science.gov (United States)

    Gleich, Tobias; Lorenz, Robert C; Gallinat, Jürgen; Kühn, Simone

    2017-05-15

    In the present longitudinal study, we aimed to investigate video game training associated neuronal changes in reward processing using functional magnetic resonance imaging (fMRI). We recruited 48 healthy young participants which were assigned to one of 2 groups: A group in which participants were instructed to play a commercial video game ("Super Mario 64 DS") on a portable Nintendo DS handheld console at least 30minutes a day over a period of two months (video gaming group; VG) or to a matched passive control group (CG). Before and after the training phase, in both groups, fMRI imaging was conducted during passively viewing reward and punishment-related videos sequences recorded from the trained video game. The results show that video game training may lead to reward related decrease in neuronal activation in the dorsolateral prefrontal cortex (DLPFC) and increase in the hippocampus. Additionally, the decrease in DLPFC activation was associated with gaming related parameters experienced during playing. Specifically, we found that in the VG, gaming related parameters like performance, experienced fun and frustration (assessed during the training period) were correlated to decrease in reward related DLPFC activity. Thus, neuronal changes in terms of video game training seem to be highly related to the appetitive character and reinforcement schedule of the game. Those neuronal changes may also be related to the often reported video game associated improvements in cognitive functions. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A review of techniques for the identification and measurement of fish in underwater stereo-video image sequences

    Science.gov (United States)

    Shortis, Mark R.; Ravanbakskh, Mehdi; Shaifat, Faisal; Harvey, Euan S.; Mian, Ajmal; Seager, James W.; Culverhouse, Philip F.; Cline, Danelle E.; Edgington, Duane R.

    2013-04-01

    Underwater stereo-video measurement systems are used widely for counting and measuring fish in aquaculture, fisheries and conservation management. To determine population counts, spatial or temporal frequencies, and age or weight distributions, snout to fork length measurements are captured from the video sequences, most commonly using a point and click process by a human operator. Current research aims to automate the measurement and counting task in order to improve the efficiency of the process and expand the use of stereo-video systems within marine science. A fully automated process will require the detection and identification of candidates for measurement, followed by the snout to fork length measurement, as well as the counting and tracking of fish. This paper presents a review of the techniques used for the detection, identification, measurement, counting and tracking of fish in underwater stereo-video image sequences, including consideration of the changing body shape. The review will analyse the most commonly used approaches, leading to an evaluation of the techniques most likely to be a general solution to the complete process of detection, identification, measurement, counting and tracking.
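The snout-to-fork measurement step described above can be sketched for a calibrated, rectified stereo rig: disparity gives depth, and length is the distance between the two reconstructed 3D points. The camera parameters and pixel coordinates below are made-up values, not those of any particular stereo-video system.

```python
import math

# Sketch: back-project matched snout and fork points, then measure length.
F = 800.0          # focal length in pixels (illustrative)
B = 0.20           # stereo baseline in metres (illustrative)
CX, CY = 320.0, 240.0

def to_3d(x_left, y, disparity):
    """Back-project a matched pixel pair to camera coordinates (metres)."""
    z = F * B / disparity
    return ((x_left - CX) * z / F, (y - CY) * z / F, z)

snout = to_3d(400.0, 240.0, 80.0)   # disparity 80 px -> Z = 2.0 m
fork  = to_3d(240.0, 240.0, 80.0)
length = math.dist(snout, fork)
print(round(length, 3))  # -> 0.4 (metres)
```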

  6. LBP based detection of intestinal motility in WCE images

    Science.gov (United States)

    Gallo, Giovanni; Granata, Eliana

    2011-03-01

    In this research study, a system to support medical analysis of intestinal contractions by processing WCE images is presented. Small intestine contractions are among the motility patterns which reveal many gastrointestinal disorders, such as functional dyspepsia, paralytic ileus, irritable bowel syndrome, and bacterial overgrowth. The images have been obtained using the Wireless Capsule Endoscopy (WCE) technique, a patented, disposable video color-imaging capsule. Manual annotation of contractions is a laborious task, since the recording device of the capsule stores about 50,000 images and contractions might represent only about 1% of the whole video. In this paper we propose the use of Local Binary Patterns (LBP) combined with powerful texton statistics to find the frames of the video related to contractions. We achieve a sensitivity of about 80% and a specificity of about 99%. The high detection accuracy achieved by the proposed system thus indicates that such intelligent schemes could be used as a supplementary diagnostic tool in endoscopy.
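The Local Binary Pattern descriptor mentioned above can be sketched in a few lines: each pixel is encoded by comparing its 8 neighbours with the centre value. The neighbourhood ordering and the toy image are illustrative; the paper's actual pipeline builds histograms of these codes and combines them with texton statistics.

```python
# Minimal 3x3 LBP sketch: one bit per neighbour, set when neighbour >= centre.

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left

def lbp_code(img, r, c):
    centre = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[5, 5, 5],
       [5, 4, 3],
       [5, 2, 1]]
print(lbp_code(img, 1, 1))  # -> 199 (bits 0, 1, 2, 6, 7 set)
# Per-frame histograms of these codes would then feed the contraction detector.
```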

  7. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies performed in developing a real-time system for surveillance of traffic flow, which uses monocular video cameras to find vehicle speeds for safe travel, are presented. We assume that the studied road segment is planar and straight, the camera is tilted downward from a bridge, and the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points on the vehicle is selected, and these points must be accurately tracked in at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, with displacement vectors measured in pixel units. The magnitudes of the computed vectors in image space are then transformed to object space to find their absolute values. The accuracy of the estimated speed is approximately ±1-2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
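The image-space to object-space conversion described above can be sketched as follows: a line segment of known real length fixes the metres-per-pixel factor on the rectified image, so a tracked point's per-frame displacement yields speed. All numbers are illustrative.

```python
# Sketch: known segment length -> scale factor -> speed from pixel displacement.

KNOWN_LENGTH_M = 3.0        # e.g. a painted lane marking of known length
KNOWN_LENGTH_PX = 60.0      # its length in the rectified image
FPS = 25.0                  # video frame rate

m_per_px = KNOWN_LENGTH_M / KNOWN_LENGTH_PX

def speed_kmh(displacement_px, frames=1):
    """Speed of a tracked point from its pixel displacement over `frames`."""
    metres_per_second = displacement_px * m_per_px * FPS / frames
    return metres_per_second * 3.6

print(round(speed_kmh(10.0), 1))  # 10 px/frame -> 45.0 km/h
```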

  8. Reliability of Alberta Infant Motor Scale Using Recorded Video Observations Among the Preterm Infants in India: A Reliability Study

    Directory of Open Access Journals (Sweden)

    Veena Kirthika S

    2017-10-01

    Full Text Available Background: Assessment of motor function is a vital characteristic of infant development. The Alberta Infant Motor Scale (AIMS) is considered one of the tools available for screening developmental delays, but this scale was formulated using Western samples. Every country has its own ethnic and cultural background, and various differences are observed in culture and ethnicity. Therefore, there is a need to establish the reliability of the AIMS in a South Indian population. Purpose: To find the intra-rater and inter-rater reliability of the Alberta Infant Motor Scale (AIMS) in pre-term infants using recorded video observations in an Indian population. Method: 30 preterm infants in three age groups, 0-3 months (10 infants), 4-7 months (10 infants), and 8-18 months (10 infants), were recruited for this reliability study. The AIMS was administered to the preterm infants and the performance was videotaped. The performance was then rescored by the same therapist, immediately from the video and in another two consecutive months, to estimate intra-rater reliability using ICC (3,1), a two-way mixed effects model. For inter-rater reliability, the AIMS was scored by three different raters and analysed using ICC (2,k), a two-way random effects model. Results: The two-way mixed effects model for intra-rater reliability of the AIMS gave ICC (3,1) = 0.99, and the two-way random effects model for inter-rater reliability gave ICC (2,k) = 0.96. Conclusion: The AIMS has excellent intra- and inter-rater reliability using recorded video observations among preterm infants in India.
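The ICC (3,1) consistency coefficient reported above can be sketched from the two-way ANOVA mean squares. The small subjects-by-raters ratings matrix below is invented for illustration, not the study's data.

```python
# Sketch: ICC(3,1) = (MS_rows - MS_error) / (MS_rows + (k-1) * MS_error).

def icc_3_1(scores):
    n = len(scores)          # subjects
    k = len(scores[0])       # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

ratings = [[9, 10, 9],
           [6,  7, 6],
           [8,  8, 8],
           [4,  5, 4],
           [7,  8, 7]]
print(round(icc_3_1(ratings), 3))  # close to 1: raters rank infants consistently
```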

  9. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR) images] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of 0.7 ± 0.3 pixels and a mean target registration error of 2.3 ± 1.5 mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
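The re-projection accuracy figure quoted above is typically computed by projecting known 3D points through the calibrated camera model and measuring the pixel distance to their detected image positions. The pinhole intrinsics, 3D points, and detections below are illustrative, not the system's calibration.

```python
import math

# Sketch: mean re-projection error for a simple pinhole camera model.
FX = FY = 700.0            # focal lengths in pixels (illustrative)
CX, CY = 320.0, 240.0      # principal point (illustrative)

def project(p):
    x, y, z = p
    return (FX * x / z + CX, FY * y / z + CY)

points_3d = [(0.1, 0.0, 1.0), (-0.05, 0.02, 0.8), (0.0, -0.1, 1.2)]
observed = [(390.5, 240.2), (276.6, 257.0), (320.3, 181.0)]

errors = [math.dist(project(p), obs) for p, obs in zip(points_3d, observed)]
mean_err = sum(errors) / len(errors)
print(round(mean_err, 2))  # sub-pixel mean re-projection error for this toy set
```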

  10. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.
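One way to sketch the topical-segmentation idea described above: consecutive frames whose OCR-extracted slide text barely overlaps likely mark a topic boundary. The frame texts and threshold below stand in for real OCR output and are not the paper's algorithm.

```python
# Sketch: segment a lecture by word-set (Jaccard) similarity of frame text.

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

frame_texts = [
    "sorting algorithms quicksort partition",
    "sorting algorithms quicksort pivot choice",
    "graph traversal breadth first search",
    "graph traversal depth first search",
]

BOUNDARY = 0.2  # similarity below this starts a new topical segment
segments = [[0]]
for i in range(1, len(frame_texts)):
    if jaccard(frame_texts[i - 1], frame_texts[i]) < BOUNDARY:
        segments.append([])
    segments[-1].append(i)
print(segments)  # -> [[0, 1], [2, 3]]: frames 0-1 one topic, 2-3 the next
```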

  11. Human features detection in video surveillance

    OpenAIRE

    Barbosa, Patrícia Margarida Silva de Castro Neves

    2016-01-01

    Master's dissertation in Industrial Electronics and Computer Engineering. Human activity recognition algorithms have been studied actively for decades, using sequences of 2D and 3D images from video surveillance. These new surveillance solutions and the areas of image processing and analysis have been receiving special attention and interest from the scientific community. Thus, it became possible to witness the appearance of new video compression techniques, the tr...

  12. The architecture of a video image processor for the space station

    Science.gov (United States)

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of algorithms that are necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high level language programmability, modularity, extensibility and can meet the required performance goals.

  13. Visual hashing of digital video : applications and techniques

    NARCIS (Netherlands)

    Oostveen, J.; Kalker, A.A.C.M.; Haitsma, J.A.; Tescher, A.G.

    2001-01-01

    This paper presents the concept of robust video hashing as a tool for video identification. We present considerations and a technique for (i) extracting essential perceptual features from a moving image sequence and (ii) identifying any sufficiently long unknown video segment by efficiently
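A minimal sketch in the spirit of perceptual video hashing as described above: each frame is reduced to coarse block luminances, and hash bits are the signs of differences between neighbouring blocks, so the bits survive global luminance changes. Matching an unknown clip is then a Hamming-distance comparison. This scheme is illustrative, not the authors' exact algorithm.

```python
# Sketch: sign-of-difference hash bits per frame, compared by Hamming distance.

def frame_hash(blocks):
    """blocks: list of mean block luminances; one bit per adjacent pair."""
    return [1 if b > a else 0 for a, b in zip(blocks, blocks[1:])]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

frames = [[10, 50, 40, 90], [12, 48, 43, 88]]
original = [frame_hash(f) for f in frames]
# Same clip after a mild brightness shift: the ordering, hence bits, survive.
brighter = [frame_hash([v + 20 for v in f]) for f in frames]
distance = sum(hamming(a, b) for a, b in zip(original, brighter))
print(distance)  # -> 0: the hash is robust to the brightness shift
```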

  14. VISDTA: A video imaging system for detection, tracking, and assessment: Prototype development and concept demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Pritchard, D.A.

    1987-05-01

    It has been demonstrated that thermal imagers are an effective surveillance and assessment tool for security applications because: (1) they work day or night due to their sensitivity to thermal signatures; (2) their penetrability through fog, rain, dust, etc., is better than that of the human eye; (3) short- or long-range operation is possible with various optics; and (4) they are strictly passive devices providing visible imagery which is readily interpreted by the operator with little training. Unfortunately, most thermal imagers also require the setup of a tripod and the connection of batteries, cables, a display, etc. When this is accomplished, the operator must manually move the camera back and forth, searching for signs of aggressor activity. VISDTA is designed to provide automatic panning and, in a sense, "watch" the imagery in place of the operator. The idea behind the development of VISDTA is to provide a small, portable, rugged system to automatically scan areas and detect targets by computer processing of images. It would use a thermal imager and possibly an intensified day/night TV camera, a pan/tilt mount, and a computer for system control. If mounted on a dedicated vehicle or on a tower, VISDTA will perform video motion detection functions on incoming video imagery and automatically scan predefined patterns in search of abnormal conditions which may indicate attempted intrusions into the field of regard. In that respect, VISDTA is capable of improving the ability of security forces to maintain security of a given area of interest by augmenting present techniques and reducing operator fatigue.
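The video motion detection role described above can be sketched with a running-average background model: pixels that deviate from the learned scene are flagged as motion. The frames, learning rate, and threshold are illustrative, not VISDTA's actual parameters.

```python
# Sketch: running-average background subtraction for motion detection.

ALPHA = 0.2       # background learning rate
THRESHOLD = 25    # intensity change regarded as motion

def update(background, frame):
    """Blend the new frame into the background model."""
    return [[(1 - ALPHA) * b + ALPHA * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def motion_pixels(background, frame):
    return sum(abs(f - b) > THRESHOLD
               for brow, frow in zip(background, frame)
               for b, f in zip(brow, frow))

quiet = [[50.0] * 6 for _ in range(4)]
background = quiet
for _ in range(5):                       # model settles on the static scene
    background = update(background, quiet)

intruder = [row[:] for row in quiet]
intruder[1][2] = intruder[1][3] = 200.0  # warm target enters the image
print(motion_pixels(background, intruder))  # -> 2 pixels flagged as motion
```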

  15. Exploring Multi-Modal and Structured Representation Learning for Visual Image and Video Understanding

    OpenAIRE

    Xu, Dan

    2018-01-01

    With the explosive growth of visual data, it is particularly important to develop intelligent visual understanding techniques for dealing with large amounts of data. Many efforts have been made in recent years to build highly effective and large-scale visual processing algorithms and systems. One of the core aspects in this research line is how to learn robust representations to better describe the data. In this thesis we study the problem of visual image and video understanding and specifi...

  16. Computed Quality Assessment of MPEG4-compressed DICOM Video Data.

    Science.gov (United States)

    Frankewitsch, Thomas; Söhnlein, Sven; Müller, Marcel; Prokosch, Hans-Ulrich

    2005-01-01

    Digital Imaging and Communications in Medicine (DICOM) has become one of the most popular standards in medicine. This standard specifies the exact procedures by which digital images are exchanged between devices, either using a network or a storage medium. Sources for images vary; therefore there exist definitions for the exchange of CR, CT, NMR, angiography, sonography and so on. With the standard's spread and the increasing number of sources included, data volume is increasing, too. This affects storage and traffic. While data compression is generally not accepted for long-term storage at the moment, there are many situations where data compression is possible: telemedicine for educational purposes (e.g. students at home using low-speed internet connections), presentations with standard-resolution video projectors, or even the supply on wards combined with written findings. DICOM comprises compression: for still images there is JPEG; for video, MPEG-2 is adopted. Within the last years MPEG-2 has evolved into MPEG-4, which compresses data even better, but the risk of significant errors increases, too. The effects of compression have been analyzed for entertainment movies, but these are not comparable to videos of physical examinations (e.g. echocardiography). In medical videos an individual image plays a more important role: erroneous single images affect total quality even more. Additionally, the effect of compression cannot be generalized from one test series to all videos; the result depends strongly on the source. Some investigations have been presented in which videos compressed with different MPEG-4 algorithms were compared and rated manually, but they describe only the results in a selected testbed. In this paper some methods derived from video rating are presented and discussed for automated quality control of the compression of medical videos, primarily stored in DICOM containers.
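One objective per-frame measure often used as a building block for automated quality rating, consistent with the discussion above, is PSNR between the original and the compressed frame. The tiny 8-bit luminance frames below are illustrative; the paper's own metrics are not reproduced here.

```python
import math

# Sketch: per-frame peak signal-to-noise ratio (PSNR) in decibels.

def psnr(original, compressed, peak=255.0):
    n = sum(len(row) for row in original)
    mse = sum((a - b) ** 2
              for orow, crow in zip(original, compressed)
              for a, b in zip(orow, crow)) / n
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

frame = [[100, 110, 120], [130, 140, 150]]
lossy = [[101, 109, 122], [129, 141, 148]]
print(round(psnr(frame, lossy), 1))  # ~45 dB for this tiny example
# Per-frame scores matter in medical video: a single badly degraded image
# can invalidate a clip that looks fine on average.
```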

  17. Attaching Hollywood to a Surveillant Assemblage: Normalizing Discourses of Video Surveillance

    Directory of Open Access Journals (Sweden)

    Randy K Lippert

    2015-10-01

    Full Text Available This article examines video surveillance images in Hollywood film. It moves beyond previous accounts of video surveillance in relation to film by theoretically situating the use of these surveillance images in a broader "surveillant assemblage". To this end, scenes from a sample of thirty-five (35) films of several genres are examined to discern dominant discourses and how they lend themselves to the normalization of video surveillance. Four discourses are discovered and elaborated with examples from Hollywood films. While the films give video surveillance a positive association, this is not without nuance and limitations. Thus, some forms of resistance to video surveillance are shown, while its deterrent effect is not. It is ultimately argued that Hollywood film is becoming attached to a video surveillant assemblage both discursively, through these normalizing discourses, and structurally, to the extent that actual video surveillance technology is used to produce the images.

  18. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM). Good-quality reproduction of (low-resolution) coded video of an animated facial mask at rates as low as 10-20 kbit/s using MPEG-4 object-based video is demonstrated.

  19. Blur Quantification of Medical Images: Dicom Media, Whole Slide Images, Generic Images and Videos

    Directory of Open Access Journals (Sweden)

    D. Ameisen

    2016-10-01

    platform. The focus map may be displayed on the web interface next to the thumbnail link to the WSI, or in the viewer as a semi-transparent layer over the WSI, or over the WSI map. During the test phase and first integrations in laboratories and hospitals, as well as in the FlexMIm project, more than 5000 whole slide images of multiple formats (Hamamatsu NDPI, Aperio SVS, Mirax MRXS, JPEG2000, ...), as well as hundreds of thousands of images of various formats (DICOM, TIFF, PNG, JPEG, ...) and videos (H.264), have been analyzed using our standalone software or our C, C++, Java and Python libraries. Using default or customizable threshold profiles, WSI are sorted as "accepted", "to review", or "to rescan". In order to target the samples contained inside each WSI, special attention was paid to detecting blank tiles. Dynamic blank tile detection based on statistical analysis of each WSI was built and successfully validated for all our samples. Results: More than 20 trillion pixels have been analyzed at a rate of 3.5 billion pixels per minute per quad-core processor. Quantified results can be stored in JSON-formatted logs or inside a MySQL or MongoDB database, or converted to any chosen data structure to be interoperable with existing software, each tile's result being accessible in addition to the quality map and the global quality results. This solution is easily scalable, as images can be stored at different locations, analysis can be distributed amongst local or remote servers, and quantified results can be stored in remote databases.
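A common per-tile focus score consistent with the blur quantification described above is the variance of the Laplacian response, which drops as edges are smoothed away. The kernel choice and the toy images are illustrative, not necessarily the paper's exact metric.

```python
# Sketch: variance-of-Laplacian blur score; sharp edges score high.

def laplacian_variance(img):
    vals = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            lap = (img[r - 1][c] + img[r + 1][c] + img[r][c - 1]
                   + img[r][c + 1] - 4 * img[r][c])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0, 0, 255, 255]] * 4          # hard vertical edge
blurry = [[0, 85, 170, 255]] * 4        # same edge, smoothed

print(laplacian_variance(sharp) > laplacian_variance(blurry))  # -> True
# Tiles scoring below a profile threshold would be marked "to review".
```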

  20. Abnormal eating behavior in video-recorded meals in anorexia nervosa.

    Science.gov (United States)

    Gianini, Loren; Liu, Ying; Wang, Yuanjia; Attia, Evelyn; Walsh, B Timothy; Steinglass, Joanna

    2015-12-01

    Eating behavior during meals in anorexia nervosa (AN) has long been noted to be abnormal, but little research has been done carefully characterizing these behaviors. These eating behaviors have been considered pathological, but are not well understood. The current study sought to quantify ingestive and non-ingestive behaviors during a laboratory lunch meal, compare them to the behaviors of healthy controls (HC), and examine their relationships with caloric intake and anxiety during the meal. A standardized lunch meal was video-recorded for 26 individuals with AN and 10 HC. Duration, frequency, and latency of 16 mealtime behaviors were coded using computer software. Caloric intake, dietary energy density (DEDS), and anxiety were also measured. Nine mealtime behaviors were identified that distinguished AN from HC: staring at food, tearing food, nibbling/picking, dissecting food, napkin use, inappropriate utensil use, hand fidgeting, eating latency, and nibbling/picking latency. Among AN, a subset of these behaviors was related to caloric intake and anxiety. These data demonstrate that the mealtime behaviors of patients with AN and HC differ significantly, and some of these behaviors may be associated with food intake and anxiety. These mealtime behaviors may be important treatment targets to improve eating behavior in individuals with AN. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. 47 CFR 76.1710 - Operator interests in video programming.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Operator interests in video programming. 76....1710 Operator interests in video programming. (a) Cable operators are required to maintain records in... interests in all video programming services as well as information regarding their carriage of such...

  2. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing.
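An illustrative noise-synthesis sketch in the spirit of the model described above: photon shot noise (approximated here as Gaussian with variance equal to the signal, a standard large-signal approximation), additive read noise, and quantization to digital numbers. All parameter values are invented, not measurements of any particular CCD.

```python
import math
import random

# Sketch: simulate one pixel's response with shot noise, read noise, and
# quantization. Parameters are illustrative.
random.seed(1)
READ_NOISE_E = 5.0     # read noise, electrons RMS (illustrative)
GAIN = 0.5             # digital numbers (DN) per electron (illustrative)
FULL_SCALE = 255

def sense(photo_electrons):
    shot = random.gauss(photo_electrons, math.sqrt(photo_electrons))
    read = random.gauss(0.0, READ_NOISE_E)
    dn = round((shot + read) * GAIN)            # quantization to integer DN
    return max(0, min(FULL_SCALE, dn))

signal = 400.0  # electrons collected at one pixel
samples = [sense(signal) for _ in range(2000)]
mean_dn = sum(samples) / len(samples)
print(round(mean_dn))  # close to the noise-free value of 200 DN
```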

  3. Scratch's Third Body: Video Talks Back to Television

    NARCIS (Netherlands)

    Goldsmith, Leo

    2015-01-01

    Emerging in the UK in the 1980s, Scratch Video established a paradoxical union of mass-media critique, Left-wing politics, and music-video and advertising aesthetics with its use of moving-image appropriation in the medium of videotape. Enabled by innovative professional and consumer video

  4. Video technical characteristics and recommendations for optical surveillance

    International Nuclear Information System (INIS)

    Wilson, G.L.; Whichello, J.V.

    1991-01-01

    The application of new video surveillance electronics to safeguards has introduced an urgent need to formulate and adopt video standards that will ensure the highest possible video quality and the orderly introduction of data insertion. Standards will provide guidance in the application of image processing and digital techniques. Realistic and practical standards benefit the IAEA, Member States, Support Programme equipment developers and facility operators, as they assist in the efficient utilisation of available resources. Moreover, standards shall provide a clear path for the orderly introduction of newer technologies, whilst ensuring authentication and verification of the original image through the video process. Standards emerging from the IAEA are an outcome of experience based on current knowledge, both within the safeguards arena and in the parent video industry, which comprises commercial and professional television. This paper provides a brief synopsis of recent developments which have highlighted the need for a surveillance-based video standard, together with a brief outline of these standards.

  5. Video-Stimulated Accounts: Young Children Accounting for Interactional Matters in Front of Peers

    Science.gov (United States)

    Theobald, Maryanne

    2012-01-01

    Research in the early years places increasing importance on participatory methods to engage children. The playback of video-recording to stimulate conversation is a research method that enables children's accounts to be heard and attends to a participatory view. During video-stimulated sessions, participants watch an extract of video-recording of…

  6. New trend of cardiac imaging

    International Nuclear Information System (INIS)

    Sugishita, Yasuro; Kakihana, Masaaki; Ohtsuka, Sadanori; Takeda, Tohru; Anno, Izumi; Akisada, Masayoshi; Hyodo, Kazuyuki; Ando, Masami.

    1990-01-01

    Synchrotron radiation is an intense, broad-spectrum X-ray beam. A selected X-ray wavelength was obtained by Bragg reflection, giving a monochromatic beam, which has high spatial resolution and a K-edge discontinuity in the attenuation coefficient, which, by energy subtraction, contributes to improved time resolution. An attempt to apply this method to intravenous coronary arteriography was performed in 7 anesthetized dogs. The beam was obtained by synchrotron radiation from the accumulation ring, reflected by a silicon crystal, and detected by a 7-inch image intensifier system. Two-dimensional real-time images were recorded on video tape. A phantom experiment was also performed. In the dogs, coronary arteries were clearly distinguished by synchrotron radiation, especially in real time with the video system. The phantom experiment suggested that coronary arteries could be visualized even over the visualized left ventricle. In conclusion, synchrotron radiation using two-dimensional real-time images is expected to be useful for intravenous coronary arteriography in man. (author)
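The energy-subtraction principle mentioned above can be sketched with Beer-Lambert arithmetic: log images taken just above and just below the iodine K-edge are subtracted, cancelling the background tissue attenuation while the iodine signal, which jumps sharply at the edge, remains. The attenuation coefficients and path lengths are illustrative.

```python
# Sketch: K-edge energy subtraction cancels tissue, isolates iodine contrast.

MU_TISSUE = 0.2          # per-cm attenuation; changes little across the edge
MU_IODINE_BELOW = 1.0    # iodine attenuation below the K-edge (illustrative)
MU_IODINE_ABOVE = 5.0    # sharp increase above the K-edge (illustrative)

def log_image(t_tissue_cm, t_iodine_cm, mu_iodine):
    """-log of transmitted intensity for one ray (Beer-Lambert)."""
    return MU_TISSUE * t_tissue_cm + mu_iodine * t_iodine_cm

tissue_cm, vessel_cm = 15.0, 0.2
above = log_image(tissue_cm, vessel_cm, MU_IODINE_ABOVE)
below = log_image(tissue_cm, vessel_cm, MU_IODINE_BELOW)
print(round(above - below, 2))  # tissue term cancels; only iodine remains
```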

  7. Watching video games. Playing with Archaeology and Prehistory

    Directory of Open Access Journals (Sweden)

    Daniel García Raso

    2016-12-01

    Full Text Available. Video games have become a mass-culture phenomenon typical of Western post-industrial society as well as an avant-garde narrative medium. The main focus of this paper is to explore and analyze the public image of Archaeology and Prehistory spread by video games, and how a faithful virtual image of both can be achieved. Likewise, we proceed to construct an archaeological outline of video games, understanding them as an element of Contemporary Material Culture and, therefore, subject to study by Archaeology.

  8. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  9. Characterizing popularity dynamics of online videos

    Science.gov (United States)

    Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao

    2016-07-01

    Online popularity has a major impact on videos, music, news and other content in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the popularity each item has already acquired. In this paper, we provide a quantitative, large-scale, temporal analysis of popularity dynamics in two online video websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and span a decade. We show that the popularity dynamics of online videos evolve over time, and find that they are characterized by burst behaviors, typically occurring early in a video's life span, followed by the classic preferential popularity-increase mechanism.
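
    The burst-plus-preferential dynamic described above can be illustrated with a toy simulation (a sketch only; the function name, parameters and mixing rule are assumptions for illustration, not taken from the paper):

```python
import random

def simulate_popularity(n_items=100, n_events=10000, burst_prob=0.05, seed=42):
    """Toy popularity model: each new view goes to an item with
    probability proportional to its current view count (preferential
    increase), except that with probability `burst_prob` a random item
    is picked, mimicking early-life bursts of new videos."""
    rng = random.Random(seed)
    views = [1] * n_items                      # every item starts with one view
    for _ in range(n_events):
        if rng.random() < burst_prob:
            item = rng.randrange(n_items)      # random "burst" pick
        else:
            total = sum(views)                 # preferential pick
            r = rng.uniform(0, total)
            acc = 0
            for i, v in enumerate(views):
                acc += v
                if acc >= r:
                    item = i
                    break
        views[item] += 1
    return views
```

    Running the model produces the heavy-tailed, rich-get-richer view distribution that preferential mechanisms are known for.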

  10. An Internet-Based Real-Time Audiovisual Link for Dual MEG Recordings.

    Directory of Open Access Journals (Sweden)

    Andrey Zhdanov

    Full Text Available. Most neuroimaging studies of human social cognition have focused on brain activity of single subjects. More recently, "two-person neuroimaging" has been introduced, with simultaneous recordings of brain signals from two subjects involved in social interaction. These simultaneous "hyperscanning" recordings have already been carried out with a spectrum of neuroimaging modalities, such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS). We have recently developed a setup for simultaneous magnetoencephalographic (MEG) recordings of two subjects who communicate in real time over an audio link between two geographically separated MEG laboratories. Here we present an extended version of the setup, in which we have added a video connection and replaced the telephone-landline-based link with an Internet connection. Our setup enabled transmission of video and audio streams between the sites with a one-way communication latency of about 130 ms. Our software that allows reproducing the setup is publicly available. We demonstrate that the audiovisual Internet-based link can mediate real-time interaction between two subjects who try to mirror each other's hand movements, which they can see via the video link. All nine pairs were able to synchronize their behavior. In addition to the video, we captured the subjects' movements with accelerometers attached to their index fingers; from these signals we determined that the average synchronization accuracy was 215 ms. In one subject pair we demonstrate inter-subject coherence patterns of the MEG signals that peak over the sensorimotor areas contralateral to the hand used in the task.

  11. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  12. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP):Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  15. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2000 (NODC Accession 0000728)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2000 at 23 sites, some of which had multiple depths. Estimates of substrate...

  16. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  17. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  18. Video encoder/decoder for encoding/decoding motion compensated images

    NARCIS (Netherlands)

    1996-01-01

    Video encoder and decoder, provided with a motion compensator for motion-compensated video coding or decoding in which a picture is coded or decoded in blocks in alternately horizontal and vertical steps. The motion compensator is provided with addressing means (160) and controlled multiplexers

  19. Non-mydriatic video ophthalmoscope to measure fast temporal changes of the human retina

    Science.gov (United States)

    Tornow, Ralf P.; Kolář, Radim; Odstrčilík, Jan

    2015-07-01

    The analysis of fast temporal changes of the human retina can be used to gain insight into normal physiological behavior and to detect pathological deviations. This can be important for the early detection of glaucoma and other eye diseases. We developed a small, lightweight, USB-powered video ophthalmoscope that allows taking video sequences of the human retina at 25 or more frames per second without dilating the pupil. Short sequences (about 10 s) of the optic nerve head (20° x 15°) are recorded from subjects and registered offline using a two-stage process (phase correlation and a Lucas-Kanade approach) to compensate for eye movements. From registered video sequences, different parameters can be calculated. Two applications are described here: measurement of (i) cardiac-cycle-induced pulsatile reflection changes and (ii) eye movements and fixation pattern. Cardiac-cycle-induced pulsatile reflection changes are caused by changing blood volume in the retina. Waveform and pulse parameters such as amplitude and rise time can be measured in any selected area within the retinal image. The fixation pattern ΔY(ΔX) can be assessed from eye movements during video acquisition. The eye movements ΔX[t], ΔY[t] are derived from image registration results with high temporal (40 ms) and spatial (1.86 arcmin) resolution. Parameters of pulsatile reflection changes and fixation pattern can be affected in early glaucoma, and the method described here may support early detection of glaucoma and other eye diseases.
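
    The coarse, phase-correlation stage of such offline registration can be sketched as follows (a minimal illustration assuming integer circular shifts; the function name is an assumption, and the paper's pipeline follows this with a Lucas-Kanade refinement):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) circular shift mapping `ref` onto
    `img` via FFT-based phase correlation: the normalised cross-power
    spectrum has a sharp inverse-FFT peak at the translation."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross = F_img * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:              # wrap to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

    For a frame that is a pure circular translation of the reference, the estimate is exact; real retinal frames need the subsequent refinement step.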

  20. Video capture on student-owned mobile devices to facilitate psychomotor skills acquisition: A feasibility study.

    Science.gov (United States)

    Hinck, Glori; Bergmann, Thomas F

    2013-01-01

    Objective: We evaluated the feasibility of using mobile device technology to allow students to record their own psychomotor skills so that these recordings can be used for self-reflection and formative evaluation. Methods: Students were given the choice of using DVD recorders, zip-drive video capture equipment, or their personal mobile phone, device, or digital camera to record specific psychomotor skills. During the last week of the term, they were asked to complete a 9-question survey regarding their recording experience, including details of mobile phone ownership, technology preferences, technical difficulties, and satisfaction with the recording experience and video critique process. Results: Of those completing the survey, 83% currently owned a mobile phone with video capability. Of the mobile phone owners, 62% reported having email capability on their phone and that they could transfer their video recording successfully to their computer, making it available for upload to the learning management system. Viewing the video recording of the psychomotor skill was valuable to 88% of respondents. Conclusions: Our results suggest that mobile phones are a viable technology to use for the video capture and critique of psychomotor skills, as most students own this technology and their satisfaction with this method is high.

  1. Fractal measures of video-recorded trajectories can classify motor subtypes in Parkinson's Disease

    Science.gov (United States)

    Figueiredo, Thiago C.; Vivas, Jamile; Peña, Norberto; Miranda, José G. V.

    2016-11-01

    Parkinson's Disease is one of the most prevalent neurodegenerative diseases and affects millions of individuals worldwide. The clinical criteria for classification of motor subtypes in Parkinson's Disease are subjective and may be misleading when symptoms are not clearly identifiable. A video recording protocol was used to measure hand tremor in 14 individuals with Parkinson's Disease and 7 healthy subjects. A method for motor subtype classification was proposed based on the spectral distribution of the movement and compared with the existing clinical criteria. The box-counting dimension and Hurst exponent calculated from the trajectories were used as the relevant measures for the statistical tests. The classification based on the power spectrum is shown to be well suited to separating patients with and without tremor from healthy subjects and could provide clinicians with a tool to aid in the diagnosis of patients in an early stage of the disease.
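
    As an illustration of one of the trajectory measures mentioned, a Hurst-exponent estimate via the aggregated-variance method might look like this (a sketch only; the function, scale choices and method variant are assumptions, not the authors' code):

```python
import numpy as np

def hurst_exponent(path, scales=(4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent H of a 1-D trajectory: the standard
    deviation of the displacement over a window of m samples grows like
    m**H, so H is the slope of log(std) versus log(m)."""
    inc = np.diff(np.asarray(path, dtype=float))
    stds = []
    for m in scales:
        n_blocks = inc.size // m
        blocks = inc[: n_blocks * m].reshape(n_blocks, m)
        stds.append(blocks.sum(axis=1).std())   # displacement per window
    slope, _ = np.polyfit(np.log(scales), np.log(stds), 1)
    return slope
```

    For an ordinary random walk (uncorrelated increments) the estimate should come out near 0.5; persistent tremor trajectories deviate from that value.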

  2. X-ray image intensifier photography

    International Nuclear Information System (INIS)

    Richter, K.; Angerstein, W.; Steinhardt, L.

    1980-01-01

    The present treatise on X-ray image intensifier photography starts with introductory remarks on the history of X-ray imaging and image intensifiers. In the physical-technological part, image quality and the methods of its measurement in particular are discussed in detail. The relevant equipment, such as image intensifier cameras, X-ray television, video recorders and devices for the display and evaluation of images, is presented, as well as problems of radiation doses and radiation protection. Based on 25,000 examinations of the digestive, biliary and urinary tracts as well as of the blood vessels, the applicability of X-ray image intensifier photography and its diagnostic value are demonstrated in the medical part of the book.

  3. A comparison between flexible electrogoniometers, inclinometers and three-dimensional video analysis system for recording neck movement.

    Science.gov (United States)

    Carnaz, Letícia; Moriguchi, Cristiane S; de Oliveira, Ana Beatriz; Santiago, Paulo R P; Caurin, Glauco A P; Hansson, Gert-Åke; Coury, Helenice J C Gil

    2013-11-01

    This study compared neck range-of-movement recording using three different methods: flexible electrogoniometers (EGM), inclinometers (INC) and a three-dimensional video analysis system (IMG), in simultaneous and synchronized data collection. Twelve females performed neck flexion-extension, lateral flexion, rotation and circumduction. The differences between EGM, INC, and IMG were calculated sample by sample. For the flexion-extension movement, IMG underestimated the amplitude by 13%; moreover, EGM showed a crosstalk of about 20% for the lateral flexion and rotation axes. In lateral flexion movement, all systems showed similar amplitude and the inter-system differences were moderate (4-7%). For the rotation movement, EGM showed a high crosstalk (13%) for the flexion-extension axis. During the circumduction movement, IMG underestimated the amplitude of flexion-extension movements by about 11%, and the inter-system differences were high (about 17%) except for INC-IMG regarding lateral flexion (7%) and EGM-INC regarding flexion-extension (10%). For application in the workplace, INC presents good results compared to IMG and EGM, though INC cannot record rotation. EGM should be improved in order to reduce its crosstalk errors and allow recording of the full neck range of movement. Due to non-optimal positioning of the cameras for recording flexion-extension, IMG underestimated the amplitude of these movements.

  4. Smartphone-based photoplethysmographic imaging for heart rate monitoring.

    Science.gov (United States)

    Alafeef, Maha

    2017-07-01

    The purpose of this study is to make use of visible-light reflected-mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the built-in camera feature of mobile phones to capture video from the subject's index fingertip. The video is processed, and the PPG signal resulting from the video stream processing is used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The obtained results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min, with most absolute errors lying in the range of 0.04-0.3 beats/min. Given the encouraging results, this type of HR measurement can be adopted with great benefit, especially for personal use or home-based care. The proposed method represents an efficient portable solution for accurate HR detection and recording.
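
    The spectral estimation of heart rate from such a fingertip-brightness trace can be sketched as follows (illustrative only; the function name, band limits and pipeline details are assumptions, not the paper's exact method):

```python
import numpy as np

def heart_rate_from_ppg(signal, fs):
    """Estimate heart rate (beats/min) from a PPG trace (e.g. the mean
    brightness of each fingertip video frame) by locating the dominant
    spectral peak in the plausible band 0.7-3.0 Hz (42-180 bpm)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)    # physiological HR range
    peak = freqs[band][np.argmax(spec[band])]
    return peak * 60.0
```

    A 20 s clip at a typical 30 fps gives 0.05 Hz (3 bpm) frequency resolution, which is why longer windows or peak interpolation are used in practice.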

  5. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  7. Image-based electronic patient records for secured collaborative medical applications.

    Science.gov (United States)

    Zhang, Jianguo; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Yao, Yihong; Cai, Weihua; Jin, Jin; Zhang, Guozhen; Sun, Kun

    2005-01-01

    We developed a Web-based system to interactively display image-based electronic patient records (EPR) for secured intranet and Internet collaborative medical applications. The system consists of four major components: an EPR DICOM gateway (EPR-GW), an image-based EPR repository server (EPR-Server), a Web server and an EPR DICOM viewer (EPR-Viewer). In the EPR-GW and EPR-Viewer, security modules for digital signature and authentication are integrated to perform security processing on the EPR data for integrity and authenticity. The privacy of EPR during data communication and exchange is provided by SSL/TLS-based secure communication. This presentation gives a new approach to creating and managing image-based EPR from actual patient records, and also presents a way to use Web technology and the DICOM standard to build an open architecture for collaborative medical applications.
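
    The integrity/authenticity check described can be illustrated with a keyed-hash sketch (the actual system uses PKI digital signatures over DICOM objects; HMAC is used here as a simpler stand-in, and the function names are hypothetical):

```python
import hashlib
import hmac

def sign_record(data: bytes, key: bytes) -> str:
    """Produce an integrity/authenticity tag for a serialized EPR
    object: an HMAC-SHA256 over the record bytes."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_record(data: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any change to
    the record bytes invalidates the tag."""
    return hmac.compare_digest(sign_record(data, key), tag)
```

    A true digital signature replaces the shared key with a private/public key pair, so the viewer can verify authenticity without being able to forge records.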

  8. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Full Text Available. Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on a probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by unanimously accepted metrics and by subjective tests.
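
    A plain binary box-counting dimension, of which the paper develops a probabilistic colour extension, can be sketched as follows (illustrative only; the box sizes and function name are assumptions):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting dimension of a binary image: count boxes of side s
    that contain any foreground pixel, then fit log N(s) against
    log(1/s); the slope is the fractal dimension."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h = (mask.shape[0] + s - 1) // s * s    # pad to a multiple of s
        w = (mask.shape[1] + s - 1) // s * s
        padded = np.zeros((h, w), dtype=bool)
        padded[: mask.shape[0], : mask.shape[1]] = mask
        blocks = padded.reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    A filled region should yield a dimension near 2 and a straight line near 1, the two sanity checks for any box-counting implementation.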

  9. The Use of Videos in Teaching - Some Experiences From the University of Copenhagen

    Directory of Open Access Journals (Sweden)

    Henrik Bregnhøj

    2016-11-01

    Full Text Available. This paper covers videos created and used in different learning patterns. The videos are grouped according to the teaching or learning activities in which they are used. One group of videos is used by the teacher for one-way communication, including online lectures, experts interacting with one another, instruction videos and introduction videos. Further videos are teacher-student interactive videos, including feedback on student deliveries, student productions and interactive videos. Examples of different types of videos (screencasts, pencasts and various kinds of camera recordings, from quick-and-dirty videos made by teachers at their own computers to professionally produced studio recordings, as well as audio files) from different courses at different faculties at the University of Copenhagen are presented with links, as an empirical basis for the discussion. The paper is very practically oriented and looks at, e.g., which course design and teaching situation is suitable for which type of video; at which point an audio file is preferable to a video file; and how to produce videos easily and without specialized equipment, if you don't have access to (or time for) professional assistance. In the article, we also point out how a small number of tips and tricks regarding planning, design and presentation technique can improve recordings made by teachers themselves. We argue that the way to work with audio and video is to start by analyzing the pedagogical needs, thereby adapting the type and use of audio and video to the pedagogical context.

  10. Interactive Video, The Next Step

    Science.gov (United States)

    Strong, L. R.; Wold-Brennon, R.; Cooper, S. K.; Brinkhuis, D.

    2012-12-01

    Video has the ingredients to reach us emotionally - with amazing images, enthusiastic interviews, music, and video-game-like animations - and it is emotion that motivates us to learn more about our new interest. However, watching video is usually passive. New web-based technology is expanding and enhancing the video experience, creating opportunities to use video with more direct interaction. This talk will look at an Education and Outreach team's experience producing video-centric curriculum using innovative interactive media tools from TED-Ed and FlixMaster. The Consortium for Ocean Leadership's Deep Earth Academy has partnered with the Center for Dark Energy Biosphere Investigations (C-DEBI) to send educators and a video producer aboard three deep-sea research expeditions to the Juan de Fuca plate to install and service sub-seafloor observatories. This collaboration between teachers, students, scientists and media producers has proved a productive confluence, providing new ways of understanding both ground-breaking science and the process of science itself - by experimenting with new ways to use multimedia during ocean-going expeditions and developing curriculum and other projects post-cruise.

  11. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    International Nuclear Information System (INIS)

    Pan, Guobing; Chen, Jiaoliao; Xin, Wenhui; Yan, Guozheng

    2011-01-01

    Wireless capsule endoscopy (WCE), a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, existing WCE systems are not widely applied in clinical practice because of their low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. The system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE can image the GI tract and transmit the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and experiments were performed to test the energy transfer capability. The results showed that the wireless electric power supply system was able to transfer more than 136 mW of power, enough for the operation of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig at a resolution of 320 × 240 and transmitted NTSC-format video outside the body. Thanks to the wireless power supply, a video WCE system with a high frame rate and high resolution becomes feasible, providing a novel solution for diagnosis of the GI tract in the clinic.

  12. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
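
    Classic Thurstone Case V scaling of such pairwise preference counts can be sketched as follows (a minimal illustration; the matrix layout, clipping and anchoring choices are assumptions, not the authors' procedure):

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Thurstone Case V scale values from a pairwise-preference count
    matrix, where wins[i][j] = times item i was preferred over item j.
    Each item's scale value is the mean z-score of its win proportions."""
    wins = np.asarray(wins, dtype=float)
    trials = wins + wins.T                       # comparisons per pair
    p = np.full(wins.shape, 0.5)
    np.divide(wins, trials, out=p, where=trials > 0)
    np.fill_diagonal(p, 0.5)                     # an item "ties" itself
    p = np.clip(p, 0.01, 0.99)                   # keep z-scores finite
    z = np.array([[NormalDist().inv_cdf(v) for v in row] for row in p])
    scale = z.mean(axis=1)
    return scale - scale.min()                   # anchor the lowest item at 0
```

    The resulting values form an interval scale of preference; as the abstract notes, a logistic regression on the same pairwise data additionally yields significance tests for the differences.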

  13. Improving human object recognition performance using video enhancement techniques

    Science.gov (United States)

    Whitman, Lucy S.; Lewis, Colin; Oakley, John P.

    2004-12-01

    Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering), high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus, since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low-contrast conditions whilst retaining colour content. They produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. The psychological literature indicates that low contrast levels generally lead to reduced performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range, with some differences between the enhancement systems.
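
    A crude global contrast stretch illustrates the kind of low-contrast enhancement at issue (a stand-in sketch only; the commercial systems evaluated use their own proprietary methods, and the percentile choices here are assumptions):

```python
import numpy as np

def stretch_contrast(frame, low_pct=1.0, high_pct=99.0):
    """Global contrast stretch: map the [low, high] percentile range of
    a low-contrast (e.g. hazy) frame onto the full [0, 255] range,
    clipping the extreme tails."""
    f = np.asarray(frame, dtype=float)
    lo, hi = np.percentile(f, [low_pct, high_pct])
    out = (f - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

    Because haze compresses scene radiance into a narrow band, even this simple remapping makes fine detail far more visible on a standard monitor, though it amplifies sensor noise along with the signal.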

  14. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available. Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  15. On-Board Video Recording Unravels Bird Behavior and Mortality Produced by High-Speed Trains

    Directory of Open Access Journals (Sweden)

    Eladio L. García de la Morena

    2017-10-01

    Full Text Available. Large high-speed railway (HSR) networks are planned for the near future to meet increased transport demand with low energy consumption. However, high-speed trains cause unknown levels of avian mortality, because birds use the railway and are unable to avoid approaching trains. Safety and logistic difficulties have until now precluded mortality estimation in railways through carcass removal, but information technologies can overcome such problems. We present the results obtained with an experimental on-board system to record bird-train collisions, composed of a frontal recording camera, a GPS navigation system and a data storage unit. An observer standing in the cabin behind the driver controlled the system and filled out a form with data on collisions and bird observations in front of the train. Photographs of the train front taken before and after each journey were used to improve the record of killed birds. Trains running the 321.7 km line between Madrid and Albacete (Spain) at speeds up to 250-300 km/h were equipped with the system during 66 journeys over a year, totaling approximately 14,700 km of effective recording. The review of videos produced 1,090 bird observations, 29.4% of them corresponding to birds crossing the infrastructure under the catenary and thus facing collision risk. Recordings also showed that 37.7% of bird crossings were of animals resting on some element of the infrastructure moments before the train's arrival, and that the flight initiation distance of birds (mean ± SD) was between 60 ± 33 m (passerines) and 136 ± 49 m (raptors). Mortality on the railway was estimated at 60.5 birds/km per year on a line section with 53 runs per day and 26.1 birds/km per year on a section with 25 runs per day. Our results are the first published estimation of bird mortality on an HSR and show the potential of information technologies to yield useful data for monitoring the impact of trains on birds via on-board recording systems. Moreover

  16. The Current State and Path Forward For Enterprise Image Viewing: HIMSS-SIIM Collaborative White Paper.

    Science.gov (United States)

    Roth, Christopher J; Lannum, Louis M; Dennison, Donald K; Towbin, Alexander J

    2016-10-01

    Clinical specialties have widely varied needs for diagnostic image interpretation and for clinical image and video consumption. Enterprise viewers are being deployed as part of electronic health record implementations to present the broad spectrum of clinical imaging and multimedia content created in routine medical practice today. This white paper describes enterprise viewer use cases, the drivers of recent growth, technical considerations, functionality differences between enterprise and specialty viewers, and likely future states. It is aimed at CMIOs and CIOs interested in optimizing the image-enablement of their electronic health records, or those who may be struggling with the many clinical image viewers their enterprises employ today.

  17. A simplified 2D to 3D video conversion technology: taking virtual campus video production as an example

    Directory of Open Access Journals (Sweden)

    ZHUANG Huiyang

    2012-10-01

    Full Text Available This paper describes a simplified 2D to 3D video conversion technology, taking virtual campus 3D video production as an example. First, it clarifies the meaning of 2D to 3D video conversion technology and points out the disadvantages of traditional methods. Second, it presents an innovative and convenient method, with a flow diagram and the software and hardware configurations. Finally, a detailed description of the conversion steps and precautions is given for each of the three processes: preparing materials, modeling objects and baking landscapes, and recording the screen and converting the video.

  18. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing video medical records. Two crucial issues to be decided on are the video compression format and the video streaming platform. The paper presents the numerous decision factors that have to be taken into account. The compression formats compared are DICOM, as a format representative of medical applications, both MPEGs, and several new formats targeted at IP networking. The comparison includes supported transmission rates, compression rates, and options for controlling the compression process. The second part of the paper presents the ISDN technique as a solution for provisioning tele-consultation services between medical parties accessing resources uploaded to a digital video library. Several backbone techniques (such as corporate LANs/WANs, leased lines or even radio/satellite links) are available; however, the availability of network resources for hospitals was the prevailing criterion pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio-frequency-domain solutions. The paper describes the possibilities of using both wireless and cellular networks' data transmission services as a medical video server transport layer. For the cellular-network-based solution, two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  19. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology—The ADAPT Study Data-Set

    Directory of Open Access Journals (Sweden)

    Alan Kevin Bourke

    2017-03-01

    Full Text Available Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, are not developed using the target population, or are not labelled to a high enough resolution to capture the true detail of human movement. We designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol, and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects' movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol, and a body-worn camera for out-of-lab activities. Video labelling of the subjects' movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%; Cohen's kappa, corrected kappa, Krippendorff's alpha and Fleiss' kappa all >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data synchronised with high frame-rate (≥25 fps) video-labelled data recorded in a free-living environment from older adults living independently. The dataset is suitable for validating existing activity classification systems and developing new activity classification algorithms.
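    The inter-rater agreement statistics cited above can be computed directly from two raters' label sequences. A minimal pure-Python sketch of Cohen's kappa (chance-corrected agreement between exactly two raters); the toy labels are illustrative, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two raters labelling four video segments.
a = ["walk", "walk", "sit", "sit"]
b = ["walk", "walk", "sit", "walk"]
kappa = cohens_kappa(a, b)  # p_o = 0.75, p_e = 0.5 -> kappa = 0.5
```

    Fleiss' kappa and Krippendorff's alpha generalize the same observed-vs-expected idea to more than two raters and to missing labels.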

  20. Status, recent developments and perspective of TINE-powered video system, release 3

    International Nuclear Information System (INIS)

    Weisse, S.; Melkumyan, D.; Duval, P.

    2012-01-01

    Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the past year, a milestone was reached as Video System 3 entered production level at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus is put on the integration of recording and playback of video sequences into Archive/DAQ, a standalone installation of the Video System on a notebook, and experience running on 64-bit Windows 7. In addition, new client-side multi-platform GUI/application developments using Java are about to hit the surface. Last but not least, it must be mentioned that although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered. (authors)

  1. A new colorimetrically-calibrated automated video-imaging protocol for day-night fish counting at the OBSEA coastal cabled observatory.

    Science.gov (United States)

    del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni

    2013-10-30

    Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow-water (20 m depth) cabled video platform, the OBSEA. The spectral reflectance value for each patch was measured from 400 to 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images, plus those of the training set, were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of a total of 908, corresponding to 18 days (at 30-min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and visual counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although the quantified parameters relating to the strength of the respective rhythms differed. Results
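    The Roberts operator named above is a pair of 2 × 2 cross-gradient kernels. A minimal pure-Python sketch on a toy grayscale grid (the actual protocol applies it to calibrated colour images, and library implementations are normally used instead):

```python
def roberts_magnitude(img):
    """Approximate gradient magnitude with the 2x2 Roberts cross kernels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x] - img[y + 1][x + 1]      # kernel [[+1, 0], [0, -1]]
            gy = img[y][x + 1] - img[y + 1][x]      # kernel [[0, +1], [-1, 0]]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: dark background with a bright 2x2 "fish body".
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
edges = roberts_magnitude(img)  # large only where intensity changes
```

    High output values trace the outline of the bright region, which is how the protocol isolates fish bodies against the calibrated panel background.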

  2. A New Colorimetrically-Calibrated Automated Video-Imaging Protocol for Day-Night Fish Counting at the OBSEA Coastal Cabled Observatory

    Directory of Open Access Journals (Sweden)

    Joaquín del Río

    2013-10-01

    Full Text Available Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow-water (20 m depth) cabled video platform, the OBSEA. The spectral reflectance value for each patch was measured from 400 to 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented “3D Thin-Plate Spline” warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images, plus those of the training set, were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of a total of 908, corresponding to 18 days (at 30-min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and visual counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although the quantified parameters in relation to the strength of respective rhythms were

  3. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and material for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light, with camera settings set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens, with no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery, and for detailed videography that can enhance surgical teaching and self-examination.

  4. [Medical image compression: a review].

    Science.gov (United States)

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the Internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  5. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from year 1999 (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  6. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, industrial surveillance and monitoring applications such as plant control or building security require fewer people, who must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system, in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.

  7. The establishment of Digital Image Capture System (DICS) using conventional simulator

    International Nuclear Information System (INIS)

    Oh, Tae Sung; Park, Jong Il; Byun, Young Sik; Shin, Hyun Kyoh

    2004-01-01

    The simulator is used to determine the patient field and to ensure that the treatment field encompasses the required anatomy during normal patient movement, such as breathing. The latest simulators provide real-time display of still, fluoroscopic and digitized images, but conventional simulators do not. The purpose of this study is to introduce a digital image capture system (DICS) based on a conventional simulator, and clinical cases using digitally captured still and fluoroscopic images. We connected a video signal cable to the video terminal on the back of the simulator monitor, and connected its jack to an A/D converter. After connecting the converter to a computer, we could acquire still images and record fluoroscopic sequences with an image capture program. The data created with this system can be used in patient treatment and modified for verification using image-processing software (e.g., Photoshop, Paint Shop). DICS was easy and economical to establish, its images were helpful for simulation, and it proved a powerful tool in the evaluation of department-specific patient positioning. Because commercial simulators based on digital capture are very expensive, it is not easy for most hospitals to acquire one. A DICS built on a conventional simulator enables practical use of images comparable to those of a high-cost digitized simulator, and supports study of many clinical cases when used with other software programs.

  8. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of the existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter that fuses the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce the accumulation of errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.

  9. Using airborne middle-infrared (1.45–2.0 μm) video imagery for distinguishing plant species and soil conditions

    International Nuclear Information System (INIS)

    Everitt, J.H.; Escobar, D.E.; Alaniz, M.A.; Davis, M.R.

    1987-01-01

    This paper describes the use of a black-and-white visible/infrared (0.4–2.4 μm) sensitive video camera, filtered to record radiation within the 1.45–2.0 μm middle-infrared water absorption region, for discriminating among plant species and soil conditions. The camera provided adequate-quality airborne imagery that distinguished the succulent plant species onion (Allium cepa L.) and aloe vera (Aloe barbadensis Mill.) from nonsucculent plant species. Moreover, wet soil, dry crusted soil, and dry fallow soil could be differentiated in middle-infrared video images. Succulent plants, however, could not be distinguished from wet soil or water. These results show that middle-infrared video imagery has potential for use in remote sensing research and applications

  10. Business Plan for a Record Company

    OpenAIRE

    Mbuthia, Alexander; Wakuwile, Janina

    2013-01-01

    The objective of this thesis is to develop a business plan for a record company named Kamoja Records in Espoo, Finland, that will focus on music and video production. The main purpose of this study is to determine whether the business plan is viable and whether the resulting company would be able to function as a vibrant record label. The business plan evaluates the different features that are related to music and video production. The purpose is to obtain knowledge about business planning in gene...

  11. Image and video based remote target localization and tracking on smartphones

    Science.gov (United States)

    Wang, Qia; Lobzhanidze, Alex; Jang, Hyun; Zeng, Wenjun; Shang, Yi; Yang, Jingyu

    2012-06-01

    Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, their powerful sensing and computing capabilities. In this paper, we describe a novel and accurate image- and video-based remote target localization and tracking system using Android smartphones, leveraging their built-in sensors such as the camera, digital compass, and GPS. Even though many other distance-estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, our system takes advantage of the smartphone's user-friendly interface to achieve low complexity and high accuracy. Our experimental results show that the system works accurately and efficiently.
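    One standard way such a system can localize a remote target from camera-plus-compass readings is to intersect the bearing rays observed from two GPS positions. A hedged pure-Python sketch of that two-ray triangulation in a local east/north plane; this illustrates the general technique, not necessarily the paper's exact algorithm, and the function name is illustrative:

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two bearing rays (degrees clockwise from north).

    p1 and p2 are (east, north) observer positions; returns the target
    position (east, north), or None if the bearings are parallel."""
    d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
    d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]       # zero when rays are parallel
    if abs(det) < 1e-12:
        return None
    # Cramer's rule for t in: p1 + t*d1 == p2 + s*d2
    t = (dx * (-d2[1]) + d2[0] * dy) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two observers 10 m apart sight the same target at 45 and 315 degrees.
target = triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)  # ~(5.0, 5.0)
```

    In practice the compass bearing comes from the phone's magnetometer and the camera's field-of-view geometry, and the two observation points from GPS fixes.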

  12. Dynamic Torsional and Cyclic Fracture Behavior of ProFile Rotary Instruments at Continuous or Reciprocating Rotation as Visualized with High-speed Digital Video Imaging.

    Science.gov (United States)

    Tokita, Daisuke; Ebihara, Arata; Miyara, Kana; Okiji, Takashi

    2017-08-01

    This study examined the dynamic fracture behavior of nickel-titanium rotary instruments in torsional or cyclic loading at continuous or reciprocating rotation by means of high-speed digital video imaging. The ProFile instruments (size 30, 0.06 taper; Dentsply Maillefer, Ballaigues, Switzerland) were categorized into 4 groups (n = 7 in each group) as follows: torsional/continuous (TC), torsional/reciprocating (TR), cyclic/continuous (CC), and cyclic/reciprocating (CR). Torsional loading was performed by rotating the instruments while holding the tip with a vise. For cyclic loading, a custom-made device with a 38° curvature was used. Dynamic fracture behavior was observed with a high-speed camera. The time to fracture was recorded, and the fractured surface was examined with scanning electron microscopy. The TC group initially exhibited necking of the file followed by the development of an initial crack line. The TR group demonstrated opening and closing of a crack according to its rotation in the cutting and noncutting directions, respectively. The CC group separated without any detectable signs of deformation. In the CR group, initial crack formation was recognized in 5 of 7 samples. Reciprocating rotation exhibited a longer time to fracture in both torsional and cyclic fatigue testing (P < .05). The dynamic fracture behavior of the rotary instruments, as visualized with high-speed digital video imaging, varied between the different modes of rotation and the different fatigue tests. Reciprocating rotation induced slower crack propagation and conferred higher fatigue resistance than continuous rotation under both torsional and cyclic loads. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  13. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), the histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification has been used in various computer vision applications. In this research, we propose a new method for recognizing the gender of males and females in observation scenes of surveillance systems, based on feature extraction from visible-light and thermal camera videos through a CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  14. Matthias Neuenhofer: Videos 1988-1995

    DEFF Research Database (Denmark)

    Kacunko, Slavko

    Self-reflexivity of the medium through the phenomenon of video feedback … Between 1988 and 1995 it formed the basis of the video works of Matthias Neuenhofer. The presented essay on his Feedback-Videos completes the monograph 'video-trilogy' of Slavko Kacunko, which began with the book about Marcel Odenbach (1999 … of intention" (M. Baxandall): these are all characteristics of a named but not yet developed Infinitesimal Aesthetics, whose 'origin' seems to be repetition, which, as much as its 'goal', must remain unnamed, at least if the distance to the otherwise impending visual dogmatism and image … to allow the discovery of Histories, Coincidences, and Infinitesimal Aesthetics inscribed into the video medium as its unsurpassed topicality. [1] Andreas Breitenstein used this notion in his review of the book Die Winter im Süden by Norbert Gstrein (2008), in: Neue Zürcher Zeitung, 26 August 2008 …

  15. Real-time digital x-ray subtraction imaging

    International Nuclear Information System (INIS)

    Mistretta, C.A.

    1982-01-01

    The invention provides a method of producing visible difference images derived from an X-ray image of an anatomical subject, comprising the steps of directing X-rays through the anatomical subject for producing an image, converting the image into television fields comprising trains of on-going video signals, digitally storing and integrating the on-going video signals over a time interval corresponding to several successive television fields and thereby producing stored and integrated video signals, recovering the video signals from storage and producing integrated video signals, producing video difference signals by performing a subtraction between the integrated video signals and the on-going video signals outside the time interval, and converting the difference signals into visible television difference images representing on-going changes in the X-ray image
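    The claimed method amounts to building a temporally integrated "mask" from several successive television fields and subtracting it from later on-going fields. A minimal sketch on toy pixel grids (pure Python for clarity; the patent describes this being done in digital video hardware):

```python
def integrate_fields(fields):
    """Average several successive television fields into one mask image."""
    n = len(fields)
    h, w = len(fields[0]), len(fields[0][0])
    return [[sum(f[y][x] for f in fields) / n for x in range(w)]
            for y in range(h)]

def difference_image(live, mask):
    """Subtract the stored, integrated mask from an on-going field."""
    return [[live[y][x] - mask[y][x] for x in range(len(mask[0]))]
            for y in range(len(mask))]

# Three pre-contrast fields (noisy around value 10), then a live field
# in which one pixel has changed (e.g. a vessel filling with contrast).
pre = [[[10, 10], [10, 10]],
       [[11, 10], [10, 9]],
       [[9, 10], [10, 11]]]
mask = integrate_fields(pre)        # integration averages out the noise
live = [[10, 10], [10, 14]]
diff = difference_image(live, mask)  # nonzero only where the image changed
```

    Integrating over several fields is what gives the mask a better signal-to-noise ratio than any single field, so the subtraction reveals only the on-going change.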

  16. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four-frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)], as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera improves on earlier implementations in several significant respects. It was designed and used from the onset of the LSX experiments with a video frame capture system, so that an instant visual record of the shot was available to the machine operator and quantitative interpretation of the intensity information recorded in the images was facilitated. The camera was installed in the end region of the LSX, on axis, approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with ''particle dumps'' at this axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. The overall performance of the camera, video capture system, and sweeper is discussed

  17. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    Science.gov (United States)

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery in which the external orientation of the first image was typical of aerial photogrammetry, whereas the external orientation of the second was typical of terrestrial photogrammetry. Starting from the collinearity equations and assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations for a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954

  18. Violent Interaction Detection in Video Based on Deep Learning

    Science.gov (United States)

    Zhou, Peipei; Ding, Qinghai; Luo, Haibo; Hou, Xinglin

    2017-06-01

    Violent interaction detection is of vital importance in some video surveillance scenarios such as railway stations, prisons or psychiatric centres. Existing vision-based methods are mainly based on hand-crafted features, such as statistical features between motion regions, leading to poor adaptability to other datasets. Inspired by the development of convolutional networks for common activity recognition, we construct a FightNet to represent complicated visual violent interactions. In this paper, a new input modality, the image acceleration field, is proposed to better extract motion attributes. First, each video is decomposed into RGB frames. Second, the optical flow field is computed from consecutive frames, and the acceleration field is obtained from the optical flow field. Third, the FightNet is trained with three input modalities: RGB images for the spatial network, and optical flow images and acceleration images for the temporal networks. By fusing results from the different inputs, we conclude whether a video depicts a violent event or not. To provide researchers a common ground for comparison, we have collected a violent interaction dataset (VID) containing 2314 videos, 1077 with fights and 1237 without. By comparison with other algorithms, experimental results demonstrate that the proposed model for violent interaction detection shows higher accuracy and better robustness.
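    One plausible reading of the acceleration field described above is the frame-to-frame temporal difference of the dense optical-flow field. A minimal pure-Python sketch on toy flow grids; this is an assumption about the construction, since the abstract does not spell out the exact operator, and real pipelines would first compute dense flow with an off-the-shelf method:

```python
def acceleration_field(flow_prev, flow_next):
    """Per-pixel temporal difference of two optical-flow fields.

    Each flow field is a 2D grid of (u, v) displacement vectors; the
    acceleration field is their frame-to-frame vector difference."""
    return [[(n[0] - p[0], n[1] - p[1])
             for p, n in zip(row_p, row_n)]
            for row_p, row_n in zip(flow_prev, flow_next)]

# 1x2 toy grids: the left pixel's motion speeds up, the right is static.
flow_prev = [[(1.0, 0.0), (0.0, 0.0)]]
flow_next = [[(3.0, 1.0), (0.0, 0.0)]]
accel = acceleration_field(flow_prev, flow_next)  # [[(2.0, 1.0), (0.0, 0.0)]]
```

    Encoding these vector grids as images (as done for optical flow in two-stream networks) yields the third input modality fed to the temporal network.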

  19. Smoking in Video Games: A Systematic Review

    OpenAIRE

    Forsyth, SR; Malone, RE

    2016-01-01

    INTRODUCTION: Video games are played by a majority of adolescents, yet little is known about whether and how video games are associated with smoking behavior and attitudes. This systematic review examines research on the relationship between video games and smoking. METHODS: We searched MEDLINE, psycINFO, and Web of Science through August 20, 2014. Twenty-four studies met inclusion criteria. Studies were synthesized qualitatively in four domains: the prevalence and incidence of smoking imagery...

  20. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmitting ultrasound video from remote areas to doctors for diagnosis. Because remote areas often lack trained sonographers, ultrasound videos scanned by untrained operators frequently do not contain the information a physician requires. Rather than standard video transmission, mHealth-driven systems are needed that transmit valid medical videos. To address this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content and guides the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow computationally demanding medical image processing to run on the phone itself. In this paper we develop a smartphone application (app) that automatically detects the valid frames (clear organ visibility) in an ultrasound video, ignores the invalid frames (no organ visibility), and produces a compressed video. This is done by extracting GIST features from the region of interest (ROI) of each frame and classifying the frame with an SVM classifier using a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
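The classification step above pairs GIST descriptors with a quadratic (degree-2 polynomial) kernel. As a rough illustration of the kernel idea only, here is a kernel perceptron stand-in (not the paper's SVM solver), run on toy 2-D inputs instead of GIST features:

```python
import numpy as np

def quad_kernel(A, B):
    """Degree-2 polynomial kernel matrix: K[i, j] = (A[i] . B[j])**2."""
    return (A @ B.T) ** 2

def kernel_perceptron(X, y, epochs=10):
    """Learn dual coefficients alpha; a kernel perceptron standing in for
    the SVM training used in the paper."""
    K = quad_kernel(X, X)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            # decision value for sample i under the current model
            f = np.sum(alpha * y * K[:, i])
            if y[i] * f <= 0:   # misclassified: strengthen this sample
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, X_new):
    """Sign of the kernel expansion f(x) = sum_j alpha_j y_j K(x_j, x)."""
    return np.sign((alpha * y) @ quad_kernel(X, X_new))
```

On XOR-like toy data, which is not linearly separable but is separable under the quadratic kernel, this converges within a couple of epochs.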

  1. Video Observations, Atmospheric Path, Orbit and Fragmentation Record of the Fall of the Peekskill Meteorite

    Science.gov (United States)

    Ceplecha, Z.; Brown, P.; Hawkes, R. L.; Wertherill, G.; Beech, M.; Mossman, K.

    1996-02-01

    Large Near-Earth-Asteroids have played a role in modifying the character of the surface geology of the Earth over long time scales through impacts. Recent modeling of the disruption of large meteoroids during atmospheric flight has emphasized the dramatic effects that smaller objects may also have on the Earth's surface. However, comparison of these models with observations has not been possible until now. Peekskill is only the fourth meteorite to have been recovered for which detailed and precise data exist on the meteoroid atmospheric trajectory and orbit. Consequently, there are few constraints on the position of meteorites in the solar system before impact on Earth. In this paper, a preliminary analysis based on 4 of the 15 video recordings of the fireball of October 9, 1992, which resulted in the fall of a 12.4 kg ordinary chondrite (H6 monomict breccia) in Peekskill, New York, is given. Preliminary computations revealed that the Peekskill fireball was an Earth-grazing event, the third such case with precise data available. The body, with an initial mass of the order of 10^4 kg, was in a pre-collision orbit with a = 1.5 AU, an aphelion of slightly over 2 AU and an inclination of 5°. The no-atmosphere geocentric trajectory would have led to a perigee of 22 km above the Earth's surface, but the body never reached this point due to tremendous fragmentation and other forms of ablation. The dark flight of the recovered meteorite started from a height of 30 km, when the velocity dropped below 3 km/s, and the body continued 50 km more without ablation, until it hit a parked car in Peekskill, New York with a velocity of about 80 m/s. Our observations are the first video records of a bright fireball and the first motion pictures of a fireball with an associated meteorite fall.

  2. Acoustical holographic recording with coherent optical read-out and image processing

    Science.gov (United States)

    Liu, H. K.

    1980-10-01

    New acoustic holographic wave memory devices have been designed for real-time in-situ recording applications. The basic operating principles of these devices and experimental results through the use of some of the prototypes of the devices are presented. Recording media used in the device include thermoplastic resin, Crisco vegetable oil, and Wilson corn oil. In addition, nonlinear coherent optical image processing techniques including equidensitometry, A-D conversion, and pseudo-color, all based on the new contact screen technique, are discussed with regard to the enhancement of the normally poorly resolved acoustical holographic images.

  3. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    Science.gov (United States)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. Over the last two decades, the spatial resolution and the maximal frame rate of video capturing devices have increased dramatically. Further resolution increases, however, raise numerous challenges. As pixel size shrinks, each pixel gathers less light, which raises the noise level. The reduced pixel size also makes lens imperfections more pronounced, especially chromatic aberrations; even with high quality lenses, some chromatic aberration artefacts remain. Noise additionally increases at higher frame rates. To reduce the complexity and the price of the camera, one sensor captures all three colors by relying on a Color Filter Array. To obtain a full resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By reducing all artefacts jointly, we reduce the overall complexity of the system and avoid introducing new artefacts. To reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.
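Demosaicking, mentioned above, reconstructs the two missing color components at every pixel of the Bayer mosaic. A toy bilinear-style sketch in numpy, assuming a hypothetical RGGB layout (real pipelines, including the joint method of this paper, use far more sophisticated, noise-aware interpolation):

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean sampling masks for a hypothetical RGGB Bayer layout."""
    r = np.zeros((h, w), dtype=bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), dtype=bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def fill_channel(mosaic, mask):
    """Replace missing samples by the mean of the sampled 3x3 neighbours."""
    h, w = mosaic.shape
    vals = np.where(mask, mosaic, 0.0)
    cnt = mask.astype(float)
    pv, pc = np.pad(vals, 1), np.pad(cnt, 1)
    s = np.zeros((h, w)); c = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            s += pv[dy:dy + h, dx:dx + w]
            c += pc[dy:dy + h, dx:dx + w]
    return np.where(mask, mosaic, s / np.maximum(c, 1.0))

def demosaic(mosaic):
    """Full-colour image (H x W x 3) from a single-channel Bayer mosaic."""
    r, g, b = bayer_masks(*mosaic.shape)
    return np.stack([fill_channel(mosaic, m) for m in (r, g, b)], axis=-1)
```

A sanity property of any demosaicker: a uniformly gray scene must reconstruct to the same uniform value in all three channels.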

  4. The Effect of Motion Artifacts on Near-Infrared Spectroscopy (NIRS) Data and Proposal of a Video-NIRS System

    Directory of Open Access Journals (Sweden)

    Masayuki Satoh

    2017-11-01

    Full Text Available Aims: The aims of this study were (1) to investigate the influence of physical movement on near-infrared spectroscopy (NIRS) data, (2) to establish a video-NIRS system which simultaneously records NIRS data and the subject’s movement, and (3) to measure the oxygenated hemoglobin (oxy-Hb) concentration change (Δoxy-Hb) during a word fluency (WF) task. Experiment 1: In 5 healthy volunteers, we measured the oxy-Hb and deoxygenated hemoglobin (deoxy-Hb) concentrations during 11 kinds of facial, head, and extremity movements. The probes were set in the bilateral frontal regions. The deoxy-Hb concentration was increased in 85% of the measurements. Experiment 2: Using a pillow on the backrest of the chair, we established the video-NIRS system with data acquisition and video capture software. One hundred and seventy-six elderly people performed the WF task. The deoxy-Hb concentration was decreased in 167 subjects (95%). Experiment 3: Using the video-NIRS system, we measured the Δoxy-Hb, and compared it with the results of the WF task. Δoxy-Hb was significantly correlated with the number of words. Conclusion: Like the blood oxygen level-dependent imaging effect in functional MRI, the deoxy-Hb concentration will decrease if the data correctly reflect the change in neural activity. The video-NIRS system might be useful to collect NIRS data by recording the waveforms and the subject’s appearance simultaneously.

  5. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources of geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from moving platforms (e.g. cars, airplanes, etc.). Although MMS can provide accurate mapping solutions for different GIS applications, their cost is not affordable for many users, and only large companies and institutions can benefit from them. The main objective of this paper is to propose a new low cost MMS with reasonable accuracy using the sensors available in a smartphone, including its video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier for non-professional users to use, since the system automatically extracts the highly overlapping frames from the video without user intervention. Results of the proposed system are presented which demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
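The automatic extraction of highly overlapping frames from a video can be approximated with a simple change-detection rule: keep a new keyframe only once the scene has changed enough since the last kept frame. A numpy sketch (the mean-absolute-difference criterion and the threshold are illustrative choices, not the authors' actual selection rule):

```python
import numpy as np

def select_keyframes(frames, diff_thresh=10.0):
    """Return the indices of frames kept as keyframes.

    A frame is kept once its mean absolute gray-level difference from the
    last kept frame exceeds diff_thresh (a hypothetical tuning parameter).
    frames: iterable of 2-D grayscale arrays.
    """
    kept, last = [], None
    for i, frame in enumerate(frames):
        frame = frame.astype(float)
        if last is None or np.mean(np.abs(frame - last)) > diff_thresh:
            kept.append(i)
            last = frame
    return kept
```

Raising the threshold yields fewer, less overlapping frames; lowering it approaches keeping every frame, which is the trade-off the paper's frame-count experiments explore.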

  6. Quality-Aware Estimation of Facial Landmarks in Video Sequences

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Face alignment in video is a primitive step for facial image analysis. The accuracy of the alignment greatly depends on the quality of the face image in the video frames and low quality faces are proven to cause erroneous alignment. Thus, this paper proposes a system for quality aware face...... for facial landmark detection. If the face quality is low the proposed system corrects the facial landmarks that are detected by SDM. Depending upon the face velocity in consecutive video frames and face quality measure, two algorithms are proposed for correction of landmarks in low quality faces by using...

  7. Binocular video ophthalmoscope for simultaneous recording of sequences of the human retina to compare dynamic parameters

    Science.gov (United States)

    Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim

    2017-07-01

    A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. Video sequences were registered off-line to compensate for eye movements. From registered video sequences dynamic parameters like cardiac cycle induced reflection changes and eye movements can be calculated and compared between eyes.

  8. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding...... and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...

  9. Identifying FRBR Work-Level Data in MARC Bibliographic Records for Manifestations of Moving Images

    Directory of Open Access Journals (Sweden)

    Lynne Bisko

    2008-12-01

    Full Text Available The library metadata community is dealing with the challenge of implementing the conceptual model, Functional Requirements for Bibliographic Records (FRBR). In response, the Online Audiovisual Catalogers (OLAC) created a task force to study the issues related to creating and using FRBR-based work-level records for moving images. This article presents one part of the task force's work: it looks at the feasibility of creating provisional FRBR work-level records for moving images by extracting data from existing manifestation-level bibliographic records. Using a sample of 941 MARC records, a subgroup of the task force conducted a pilot project to look at five characteristics of moving image works. Here they discuss their methodology; analysis; selected results for two elements, original date (year) and director name; and conclude with some suggested changes to MARC coding and current cataloging policy.

  10. Video incident analysis of head injuries in high school girls' lacrosse.

    Science.gov (United States)

    Caswell, Shane V; Lincoln, Andrew E; Almquist, Jon L; Dunn, Reginald E; Hinton, Richard Y

    2012-04-01

    Knowledge of injury mechanisms and game situations associated with head injuries in girls' high school lacrosse is necessary to target prevention efforts. To use video analysis and injury data to provide an objective and comprehensive visual record to identify mechanisms of injury, game characteristics, and penalties associated with head injury in girls' high school lacrosse. Descriptive epidemiology study. In the 25 public high schools of 1 school system, 529 varsity and junior varsity girls' lacrosse games were videotaped by trained videographers during the 2008 and 2009 seasons. Video of head injury incidents was examined to identify associated mechanisms and game characteristics using a lacrosse-specific coding instrument. Of the 25 head injuries (21 concussions and 4 contusions) recorded as game-related incidents by athletic trainers during the 2 seasons, 20 head injuries were captured on video, and 14 incidents had sufficient image quality for analysis. All 14 incidents of head injury (11 concussions, 3 contusions) involved varsity-level athletes. Most head injuries resulted from stick-to-head contact (n = 8), followed by body-to-head contact (n = 4). The most frequent player activities were defending a shot (n = 4) and competing for a loose ball (n = 4). Ten of the 14 head injuries occurred inside the 12-m arc and in front of the goal, and no penalty was called in 12 injury incidents. All injuries involved 2 players, and most resulted from unintentional actions. Turf versus grass did not appear to influence number of head injuries. Comprehensive video analysis suggests that play near the goal at the varsity high school level is associated with head injuries. Absence of penalty calls on most of these plays suggests an area for exploration, such as the extent to which current rules are enforced and the effectiveness of existing rules for the prevention of head injury.

  11. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  12. A Peer-Reviewed Instructional Video is as Effective as a Standard Recorded Didactic Lecture in Medical Trainees Performing Chest Tube Insertion: A Randomized Control Trial.

    Science.gov (United States)

    Saun, Tomas J; Odorizzi, Scott; Yeung, Celine; Johnson, Marjorie; Bandiera, Glen; Dev, Shelly P

    Online medical education resources are becoming an increasingly used modality and many studies have demonstrated their efficacy in procedural instruction. This study sought to determine whether a standardized online procedural video is as effective as a standard recorded didactic teaching session for chest tube insertion. A randomized control trial was conducted. Participants were taught how to insert a chest tube with either a recorded didactic teaching session, or a New England Journal of Medicine (NEJM) video. Participants filled out a questionnaire before and after performing the procedure on a cadaver, which was filmed and assessed by 2 blinded evaluators using a standardized tool. Western University, London, Ontario. Level of clinical care: institutional. A total of 30 fourth-year medical students from 2 graduating classes at the Schulich School of Medicine & Dentistry were screened for eligibility. Two students did not complete the study and were excluded. There were 13 students in the NEJM group, and 15 students in the didactic group. The NEJM group's average score was 45.2% (±9.56) on the prequestionnaire, 67.7% (±12.9) for the procedure, and 60.1% (±7.65) on the postquestionnaire. The didactic group's average score was 42.8% (±10.9) on the prequestionnaire, 73.7% (±9.90) for the procedure, and 46.5% (±7.46) on the postquestionnaire. There was no difference between the groups on the prequestionnaire (Δ + 2.4%; 95% CI: -5.16 to 9.99), or the procedure (Δ -6.0%; 95% CI: -14.6 to 2.65). The NEJM group had better scores on the postquestionnaire (Δ + 11.15%; 95% CI: 3.74-18.6). The NEJM video was as effective as video-recorded didactic training for teaching the knowledge and technical skills essential for chest tube insertion. Participants expressed high satisfaction with this modality. It may prove to be a helpful adjunct to standard instruction on the topic. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc.

  13. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to help clinicians go through the abnormal contents of a video more effectively. To select the most representative frames from the original video sequence, we formulate gastroscopic video summarization as a dictionary selection problem. Unlike traditional dictionary selection methods, which consider only the number and reconstruction ability of the selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of the selected key frames. We calculate an attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state of the art using content consistency, index consistency and content-index consistency against the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated by content consistency, 24 of 30 videos evaluated by index consistency, and all videos evaluated by content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be adapted automatically to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

    Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture contexts at three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at the different levels, so that contexts at all levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level improves event recognition performance, and jointly integrating the three levels of context through our hierarchical model achieves the best performance.

  15. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    OpenAIRE

    S Safinaz; A V Ravi Kumar

    2017-01-01

    In recent years, video super resolution techniques have become essential for obtaining high resolution videos. Many super resolution techniques have been researched, but video super resolution, or scaling, remains a vital challenge. In this paper, we present a real-time video scaling method based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality while scaling large datasets from lower resolution frames t...

  16. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame...... average between the global quality and the local quality. Experimental results demonstrate that the combination of the global quality and local quality outperforms both sole global quality and local quality, as well as other quality models, in video quality assessment. In addition, the proposed video...... quality modeling algorithm can improve the performance of image quality metrics on video quality assessment compared to the normal averaged spatiotemporal pooling scheme....

  17. WT-Bird. Bird collision recording for offshore wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Wiggelinkhuizen, E.J.; Rademakers, L.W.M.M.; Barhorst, S.A.M. [ECN Wind Energy, Petten (Netherlands); Den Boon, H.J. [E-Connection Project, Bunnik (Netherlands); Dirksen, S. [Bureau Waardenburg, Culemborg (Netherlands); Schekkerman, H. [Alterra, Wageningen (Netherlands)

    2006-03-15

    A new method for registering bird collisions has been developed using video cameras and microphones combined with event triggering by acoustic vibration measurement. Remote access to the recorded images and sounds makes it possible to count the number of collisions as well as to identify the species. Currently a prototype system is being tested on an offshore-scale land-based wind turbine using bird dummies. After these tests, endurance tests are planned on other land-based turbines under offshore-like conditions.

  18. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation-type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
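The triangulation geometry described above reduces to one formula: with the laser offset d parallel to the optical axis and the spot seen at angle a from the axis, the lateral offset of the spot at range R is d, so tan(a) = d/R and R = d / tan(a). A minimal sketch:

```python
import math

def range_from_spot(d, a):
    """Triangulated range from the geometry in the text: laser offset d
    (parallel to the optical axis) and angle a (radians) between the
    optical axis and the line of sight to the centroid of the laser spot."""
    return d / math.tan(a)
```

For example, with a 0.1 m offset, a spot observed at the angle subtended by 0.1 m at 2 m distance implies a 2 m range; as a grows toward the offset direction, the computed range shrinks, which is why a larger baseline d improves far-range precision.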

  19. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
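The "critical frequency" behavior described above is aliasing: a vibration above the Nyquist limit of the camera's frame rate folds down to a false low frequency. A small numpy sketch that predicts the apparent frequency and recovers a sub-Nyquist tone from a synthetic pixel-sum signal (the synthetic sine stands in for real summed gray-level camera data):

```python
import numpy as np

def alias_frequency(f_true, fs):
    """Apparent frequency of a tone f_true (Hz) sampled at frame rate fs."""
    return abs(f_true - fs * round(f_true / fs))

# Recovering a sub-Nyquist vibration from a synthetic pixel-sum signal.
fs = 60.0                                  # frames per second
t = np.arange(0, 2.0, 1.0 / fs)            # 2 s of frames
signal = np.sin(2 * np.pi * 10.0 * t)      # 10 Hz vibration, below Nyquist
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # dominant non-DC bin
```

At 60 fps, a 70 Hz vibration appears at 10 Hz, indistinguishable from a true 10 Hz tone, which matches the paper's point that frame-rate-induced non-physical modes must be predicted and excluded rather than read off the spectrum.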

  20. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.