WorldWideScience

Sample records for control video cameras

  1. Automated safety control by video cameras

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.; Somhorst, M.

    2012-01-01

    At this moment many surveillance systems are installed in public domains to control the safety of people and properties. They are constantly watched by human operators who are easily overloaded. To support the human operators, a surveillance system model is designed that detects suspicious behaviour

  2. Automatic Level Control for Video Cameras towards HDR Techniques

    Directory of Open Access Journals (Sweden)

de With, Peter H. N.

    2010-01-01

Full Text Available We give a comprehensive overview of the complete exposure processing chain for video cameras. For each step of the automatic exposure algorithm we discuss some classical solutions and propose improvements or new alternatives. We start by explaining exposure metering methods, describing the types of signals that are used as scene content descriptors as well as the means to utilize these descriptors. We also discuss the different exposure control types used for the control of the lens, the integration time of the sensor, and the gain, such as PID control and precalculated control based on the camera response function, and propose a new recursive control type that matches the underlying image formation model. Then, the commonly used serial control strategy for lens, sensor exposure time, and gain is described, followed by a proposal of a new parallel control solution that integrates well with the tone mapping and enhancement parts of the image pipeline. The parallel control strategy enables faster and smoother control and facilitates optimally filling the dynamic range of the sensor to improve the SNR and image contrast while avoiding signal clipping. This is achieved by the proposed special control modes used for better display and correct exposure of both low-dynamic-range and high-dynamic-range images. To overcome the inherent problem of the limited dynamic range of capturing devices, we discuss the paradigm of multiple-exposure techniques. Using these techniques we can enable a correct rendering of a difficult class of high-dynamic-range input scenes. However, multiple-exposure techniques bring several challenges, especially in the presence of motion and artificial light sources such as fluorescent lights. In particular, false colors and light-flickering problems are described. After briefly discussing some known possible solutions for the motion problem, we focus on solving the fluorescent-light problem. Thereby, we propose an algorithm for
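The control loop described above can be sketched in miniature. The following is an illustrative proportional (P-only) controller driving mean frame luminance toward a target; the gain `kp`, the target level, and the linear-sensor simulation are assumptions for the sketch, not the paper's recursive controller:

```python
import numpy as np

def exposure_step(frame, exposure, target=0.5, kp=0.8,
                  exp_min=1e-4, exp_max=1.0):
    """One iteration of a proportional auto-exposure loop.

    frame: image normalized to [0, 1]; exposure: current integration
    time (arbitrary units). Returns the updated exposure value.
    """
    mean_luma = float(np.mean(frame))
    # Multiplicative update: scale exposure toward the target level.
    ratio = target / max(mean_luma, 1e-6)
    new_exposure = exposure * (1.0 + kp * (ratio - 1.0))
    return float(np.clip(new_exposure, exp_min, exp_max))

# Simulated scene: sensor response proportional to exposure (linear
# model, clipped at saturation).
scene = 2.0      # scene radiance factor
exposure = 0.05
for _ in range(20):
    frame = np.clip(scene * exposure * np.ones((4, 4)), 0.0, 1.0)
    exposure = exposure_step(frame, exposure)
print(round(scene * exposure, 2))  # converges near the 0.5 target
```

A real controller would add integral/derivative terms or, as the paper proposes, exploit the image formation model directly for one-step convergence.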

  3. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  4. An Assignment Scheme to Control Multiple Pan/Tilt Cameras for 3D Video

    Directory of Open Access Journals (Sweden)

    Sofiane Yous

    2007-02-01

Full Text Available This paper presents an assignment scheme to control multiple Pan/Tilt (PT) cameras for 3D video of a moving object. The system combines static wide-field-of-view (FOV) cameras and active PT cameras with narrow FOV within a networked platform. We consider the general case where the active cameras have such high resolution that they can capture only partial views of the object. The major issue is the automatic assignment of each active camera to an appropriate part of the object in order to get high-resolution images of the whole object. We propose an assignment scheme based on the analysis of a coarse 3D shape produced in a preprocessing step from the wide-FOV images. For each high-resolution camera, we evaluate the visibility toward the different parts of the shape, corresponding to different orientations of the camera and with respect to its FOV. Then, we assign each camera to one orientation in order to get high visibility of the whole object. The continuously captured images are saved to be used offline in the reconstruction of the object. For a temporal extension of this scheme, we involve, in addition to the visibility analysis, the last camera orientation as an additional constraint. This allows smooth and optimized camera movements.
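The visibility-based assignment can be illustrated with a greedy sketch. The camera names, part labels, and visibility scores below are hypothetical, and the paper's actual optimization may differ:

```python
# Hypothetical visibility scores: visibility[c][p] = how well camera c
# sees object part p when oriented toward it (higher is better).
visibility = {
    "cam0": {"head": 0.9, "torso": 0.6, "legs": 0.2},
    "cam1": {"head": 0.4, "torso": 0.8, "legs": 0.7},
    "cam2": {"head": 0.3, "torso": 0.5, "legs": 0.9},
}

def assign_cameras(visibility):
    """Greedily assign each camera to one object part, preferring
    high visibility and covering still-uncovered parts first."""
    assignment = {}
    covered = set()
    # Process cameras by their best achievable visibility, descending.
    for cam in sorted(visibility, key=lambda c: -max(visibility[c].values())):
        parts = visibility[cam]
        # Prefer parts no camera covers yet; fall back to the best overall.
        uncovered = {p: v for p, v in parts.items() if p not in covered}
        pool = uncovered if uncovered else parts
        best = max(pool, key=pool.get)
        assignment[cam] = best
        covered.add(best)
    return assignment

print(assign_cameras(visibility))
```

Adding the previous orientation as a tie-breaking cost, as the temporal extension does, would smooth camera motion between frames.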

  5. Compact high-performance MWIR camera with exposure control and 12-bit video processor

    Science.gov (United States)

    Villani, Thomas S.; Loesser, Kenneth A.; Perna, Steve N.; McCarthy, D. R.; Pantuso, Francis P.

    1998-07-01

The design and performance of a compact infrared camera system is presented. The 3 - 5 micron MWIR imaging system consists of a Stirling-cooled 640 X 480 staring PtSi infrared focal plane array (IRFPA) with a compact, high-performance 12-bit digital image processor. The low-noise CMOS IRFPA is X-Y addressable, utilizes on-chip scanning registers and has electronic exposure control. The digital image processor uses 16-frame-averaged, 2-point non-uniformity compensation and defective pixel substitution circuitry. There are separate 12-bit digital and analog I/O ports for display control and video output. The versatile camera system can be configured in NTSC, CCIR, and progressive scan readout formats, and the exposure control settings are digitally programmable.

  6. Optimal camera exposure for video surveillance systems by predictive control of shutter speed, aperture, and gain

    Science.gov (United States)

    Torres, Juan; Menéndez, José Manuel

    2015-02-01

This paper establishes a real-time auto-exposure method to guarantee that surveillance cameras in uncontrolled light conditions take advantage of their whole dynamic range while providing neither under- nor overexposed images. State-of-the-art auto-exposure methods base their control on the brightness of the image measured in a limited region where the foreground objects are mostly located. Unlike these methods, the proposed algorithm establishes a set of indicators, based on the image histogram, that define its shape and position. Furthermore, since the location of the objects to be inspected is usually unknown in surveillance applications, the whole image is monitored in this approach. To control the camera settings, we defined a parameter function (Ef) that depends linearly on the shutter speed and the electronic gain, and is inversely proportional to the square of the lens aperture diameter. When the current acquired image is not overexposed, our algorithm computes the value of Ef that would move the histogram up to the maximum value that does not overexpose the capture. When the current acquired image is overexposed, it computes the value of Ef that would move the histogram to a value that does not underexpose the capture and remains close to the overexposed region. If the image is both under- and overexposed, the whole dynamic range of the camera is already in use, and a default value of Ef that does not overexpose the capture is selected. This decision follows the idea that underexposed images are preferable to overexposed ones, because the noise produced in the lower regions of the histogram can be removed in a post-processing step, while the saturated pixels of the higher regions cannot be recovered. The proposed algorithm was tested on a video surveillance camera placed at an outdoor parking lot surrounded by buildings and trees, which produce moving shadows on the ground. During the daytime of seven days, the algorithm was running alternatively together
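The histogram-tail test and the Ef parameter function described above can be sketched as follows; the tail thresholds (`lo`, `hi`, `frac`) and the absolute scaling of Ef are illustrative assumptions:

```python
import numpy as np

def exposure_factor(shutter_s, gain, aperture_d):
    # Ef is linear in shutter time and electronic gain, and inversely
    # proportional to the squared lens aperture diameter.
    return (shutter_s * gain) / (aperture_d ** 2)

def classify_exposure(img, lo=5, hi=250, frac=0.01):
    """Label an 8-bit frame as (underexposed, overexposed) from the
    fraction of pixels in the histogram tails."""
    under = np.mean(img <= lo) > frac
    over = np.mean(img >= hi) > frac
    return under, over

dark = np.full((64, 64), 3, dtype=np.uint8)
bright = np.full((64, 64), 252, dtype=np.uint8)
print(classify_exposure(dark))    # (True, False)
print(classify_exposure(bright))  # (False, True)
```

In the paper's scheme the classification outcome decides whether Ef is pushed up toward the largest non-overexposing value or pulled back down.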

  7. High-performance digital color video camera

    Science.gov (United States)

    Parulski, Kenneth A.; D'Luna, Lionel J.; Benamati, Brian L.; Shelley, Paul R.

    1992-01-01

Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique and two full-custom CMOS digital video processing integrated circuits: the color filter array (CFA) processor and the RGB postprocessor. The system used a 768 X 484 active-element interline-transfer CCD with a new field-staggered 3G color filter pattern and a lenslet overlay, which doubles the sensitivity of the camera. The industrial-quality digital camera design offers improved image quality, reliability, and manufacturability while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB postprocessor digital integrated circuit includes a color correction matrix, gamma correction, 2D edge enhancement, and circuits to control the black balance, lens aperture, and focus.
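The gamma-correction stage of such an RGB postprocessor is commonly implemented as a lookup table. The following sketch builds an 8-bit LUT; the gamma value 2.2 is an assumption for illustration, not a figure from the paper:

```python
import numpy as np

def gamma_lut(gamma=2.2, bits=8):
    """Build the gamma-correction lookup table an RGB postprocessor
    would apply after the color correction matrix."""
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)  # normalized linear input
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

lut = gamma_lut()
print(lut[0], lut[255])  # black and white map to themselves
```

A hardware implementation stores exactly this table in ROM/RAM so the per-pixel operation is a single memory read; midtones are lifted (e.g. `lut[64]` is well above 64) to compensate for display response.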

  8. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

Full Text Available Summary. The most important operation in granular mixed fodder production is the molding process, during which the properties of the granular fodder are defined; these properties determine the production process and final product quality. The article analyzes the possibility of using a digital video camera as a smart sensor in the production control system. A parametric model of the process of molding bundles from the granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after it leaves the matrix holes was developed. A mathematical model of the automatic control system (ACS) that uses a reference video frame as the set point was built in the MATLAB software environment. As the controlled parameter of the bundle molding process, the authors propose the specific area value obtained by mathematical processing of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the fodder mass from video frame images. Digital video of various operating modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions describing changes of the adjustable parameter (the specific area) were determined. Structural and functional diagrams of the control system for the fodder bundle molding process using digital camcorders were built and analyzed. A mathematical model of bundle motion after leaving the matrix hole was obtained from the equations of fluid dynamics; in addition to viscosity, the creep behavior characteristic of the fodder mass was considered. The mathematical model of the ACS for the bundle molding process, which allows investigation of the transient processes that occur in a control system using a digital video camera as the smart sensor, was developed in Simulink

  9. Application of video-cameras for quality control and sampling optimisation of hydrological and erosion measurements in a catchment

    Science.gov (United States)

    Lora-Millán, Julio S.; Taguas, Encarnacion V.; Gomez, Jose A.; Perez, Rafael

    2014-05-01

Long-term soil erosion studies imply substantial effort, particularly when continuous measurements must be maintained. High costs are associated with keeping field equipment running and with quality control of data collection. Energy supply and/or electronic failures, vandalism and burglary are common causes of gaps in datasets, in many cases reducing their usefulness. In this work, a system of three video cameras, a recorder and a transmission modem (3G technology) has been set up at a gauging station where rainfall, runoff flow and sediment concentration are monitored. The gauging station is located at the outlet of an olive orchard catchment of 6.4 ha. Rainfall is measured with an automatic rain gauge that records intensity at one-minute intervals. Discharge is measured by a critical-depth flume, where the water level is recorded by an ultrasonic sensor. When the water level rises to a predetermined value, an automatic sampler turns on and fills a bottle at different intervals according to a program that depends on the antecedent precipitation. A data logger controls the instruments' functions and records the data. The purpose of the video camera system is to improve the quality of the dataset by i) visual analysis of the flow conditions in the flume; ii) optimisation of the sampling programs. The cameras are positioned to record the flow at the approach and at the throat of the flume. To cross-check the ultrasonic sensor values, a third camera records the flow level against a measuring tape. The system is activated when the ultrasonic sensor detects a height threshold, equivalent to an electric current level. Thus, the video cameras record the event only when there is enough flow. This simplifies post-processing and reduces the cost of downloading the recordings. The preliminary comparison analysis will be presented, as well as the main improvements to the sampling program.
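The threshold-triggered recording logic can be sketched in a few lines; the water-level trace and threshold below are hypothetical:

```python
def recording_schedule(levels, threshold):
    """Return the sample indices (e.g. minutes) during which the
    cameras record, i.e. whenever the ultrasonic water-level reading
    reaches the trigger threshold, as in the gauging-station setup."""
    return [i for i, h in enumerate(levels) if h >= threshold]

# Hypothetical water-level trace (cm) sampled once per minute.
levels = [1, 2, 2, 8, 12, 9, 3, 1]
print(recording_schedule(levels, threshold=5))  # → [3, 4, 5]
```

Recording only above the threshold is what keeps the video archive small enough to download economically over the 3G link.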

  10. Mirrored Light Field Video Camera Adapter

    OpenAIRE

    Tsai, Dorian; Dansereau, Donald G.; Martin, Steve; Corke, Peter

    2016-01-01

    This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible. Mirrors of different shape and orientation reflect the scene into an upwards-facing camera to create an array of virtual cameras with overlapping field of view at specified depths, and deliver video frame rate light fields. We describe the design, construction, decoding and calibration processes of our mirror-based light field camera adapter in preparation ...

  11. Close-range photogrammetry with video cameras

    Science.gov (United States)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
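A minimal sketch of the optical-distortion side of such a correction, assuming the common one-parameter radial model (plumb-line calibration estimates coefficients of this kind, though the paper's exact model may include more terms):

```python
def undistort_point(x, y, k1, cx=0.0, cy=0.0):
    """Apply a one-parameter radial (barrel/pincushion) correction.
    (x, y) are image coordinates; (cx, cy) is the assumed distortion
    center; k1 is the radial coefficient from calibration."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2          # radial scaling grows off-axis
    return cx + dx * scale, cy + dy * scale

# A point at the distortion center is unchanged; off-axis points move.
print(undistort_point(0.0, 0.0, k1=1e-6))
print(undistort_point(100.0, 0.0, k1=1e-6))  # shifted outward
```

Electronic distortion, as the paper notes, is handled separately (bilinear or polynomial interpolation over a measured grid) before this optical step.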

  12. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  13. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object partially visible or hidden in one camera while the same object is fully visible in the other. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as on variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked continuously over long distances or across multiple cameras, applicable, for example, in wireless sensor networks for surveillance or navigation.
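The occlusion-handling fusion of two synchronized tracks can be sketched as follows, assuming detections have already been registered into a common coordinate frame (i.e. perspective differences adjusted):

```python
def fuse_tracks(track_a, track_b):
    """Fuse per-frame detections from two synchronized cameras:
    when one camera loses the object (None, e.g. occlusion), fall
    back to the other; when both see it, average the estimates."""
    fused = []
    for pa, pb in zip(track_a, track_b):
        if pa is None and pb is None:
            fused.append(None)
        elif pa is None:
            fused.append(pb)
        elif pb is None:
            fused.append(pa)
        else:
            fused.append(((pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2))
    return fused

# Camera A loses the object at frame 1 (partial occlusion).
a = [(10, 10), None, (14, 12)]
b = [(10, 12), (12, 11), (14, 14)]
print(fuse_tracks(a, b))
```

A production tracker would weight the fusion by detection confidence rather than averaging, but the fallback structure is the same.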

  14. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the base line between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  15. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam, even a good face

  16. Study of design and control of remote manipulators. Part 4: Experiments in video camera positioning with regard to remote manipulation

    Science.gov (United States)

    Mackro, J.

    1973-01-01

The results are presented of a study involving closed-circuit television as the means of providing the necessary task-to-operator feedback for efficient performance of a remote manipulation system. Experiments were performed to determine the remote video configuration that results in the best overall system. Two categories of tests were conducted: those that involved remote position (rate) control of just the video system, and those in which closed-circuit TV was used along with manipulation of the objects themselves.

  17. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single-objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  18. A new, accurate and easy to implement camera and video projector model.

    Science.gov (United States)

    Hoppe, Harald; Däuber, Sascha; Kübler, Carsten; Raczkowsky, Jörg; Wörn, Heinz

    2002-01-01

In 2000, the Institute for Process Control and Robotics at the Universität Karlsruhe (TH) developed a prototype system for projector-based augmented reality consisting of a state-of-the-art PC, two CCD cameras and a video projector, which is used for registration and projection of surgical planning data. Tracking, registration and projection require an accurate calibration process for the cameras and the video projector. We have developed a new, flexible, simple and easy-to-implement model that can be used for the calibration of both cameras and video projectors.

  19. Refocusing images and videos with a conventional compact camera

    Science.gov (United States)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography, such as portraits and creative photography. Since most existing digital refocusing methods rely on a four-dimensional light field captured by special, precisely manufactured devices, or on a sequence of images captured by a single camera, existing systems are either too expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 megapixels) images and videos based on a single shot using an easy-to-build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable, realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.
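The refocusing-rendering step can be caricatured with a toy spatially varying blur driven by disparity; a real renderer would use a proper circle-of-confusion model and the calibrated stereo geometry:

```python
import numpy as np

def refocus(image, disparity, focus_disp, blur_scale=1.0):
    """Toy synthetic depth-of-field: box-blur each pixel with a radius
    proportional to how far its disparity lies from the chosen
    in-focus disparity plane."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(blur_scale * abs(disparity[y, x] - focus_disp))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 1.0                      # single bright point
disp = np.full((5, 5), 4.0)          # whole scene at disparity 4
sharp = refocus(img, disp, focus_disp=4.0)   # in focus: unchanged
blurred = refocus(img, disp, focus_disp=2.0) # defocused: spread out
print(sharp[2, 2], blurred[2, 2] < 1.0)
```

The per-pixel blur radius stands in for the circle of confusion; moving `focus_disp` is the "refocusing" control.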

  20. Interfacing the Analog Camera with FPGA Board for Real-time Video Acquisition

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2014-03-01

Full Text Available Advances in FPGA technology have dramatically increased the use of FPGAs for computer vision applications. The primary task in the development of such FPGA-based systems is interfacing the analog camera with the FPGA board. This paper describes the design and implementation of the camera interface module required for connecting an analog camera to the Xilinx ML510 (Virtex-5 FXT) FPGA board, which has no video input port. A Digilent VDEC1 video daughter card is used for digitizing the analog video into digital form. The necessary control logic for video acquisition and video display is designed using VHDL and Verilog, simulated in ModelSim, and synthesized using Xilinx ISE 12.1. The designed and implemented interfaces provide real-time video acquisition and display.

  1. Demonstrations of Optical Spectra with a Video Camera

    Science.gov (United States)

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  2. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

Full Text Available Person identification plays an important role in semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured by a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing information from motion sensor platforms, such as smart phones carried on human bodies, with information extracted from the camera video. More specifically, a sequence of motion features extracted from the camera video is compared with each of those collected from the accelerometers of the smart phones. When a strong correlation is detected, identity information transmitted from the corresponding smart phone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted, which achieved impressive performance.
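The motion-correlation matching can be sketched with a simple Pearson-correlation comparison; the signals and phone identifiers below are synthetic:

```python
import numpy as np

def identify_person(video_motion, phone_signals):
    """Match a motion trace extracted from video against accelerometer
    traces from candidate phones; return the phone id with the
    strongest Pearson correlation, mirroring the paper's idea."""
    best_id, best_r = None, -2.0
    v = np.asarray(video_motion, dtype=float)
    for pid, sig in phone_signals.items():
        r = np.corrcoef(v, np.asarray(sig, dtype=float))[0, 1]
        if r > best_r:
            best_id, best_r = pid, r
    return best_id, best_r

walk = [0, 1, 0, -1, 0, 1, 0, -1]          # motion seen in video
phones = {
    "alice": [0, 2, 0, -2, 0, 2, 0, -2],   # same gait, scaled
    "bob":   [1, 1, 0, 0, 1, 1, 0, 0],     # different pattern
}
print(identify_person(walk, phones)[0])     # → alice
```

Pearson correlation is scale-invariant, which is why the amplitude mismatch between camera-derived motion and raw accelerometer units does not matter here.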

  3. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
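The reference-star side of such a calibration reduces to fitting a photometric zero point. A minimal sketch under the standard mag = -2.5 log10(counts) + ZP model (not the MEO's actual pipeline):

```python
import numpy as np

def photometric_zero_point(instrumental_counts, catalog_mags):
    """Fit the zero point ZP of mag = -2.5*log10(counts) + ZP from
    reference-star measurements, the calibration step behind the
    camera-bandpass synthetic magnitudes described above."""
    inst_mags = -2.5 * np.log10(np.asarray(instrumental_counts, float))
    residuals = np.asarray(catalog_mags, float) - inst_mags
    return float(np.mean(residuals))

# Synthetic stars generated with a true zero point of 20.0 mag.
counts = [1000.0, 2500.0, 400.0]
true_zp = 20.0
mags = [-2.5 * np.log10(c) + true_zp for c in counts]
zp = photometric_zero_point(counts, mags)
print(round(zp, 3))  # → 20.0
```

With real data the residual scatter around ZP is exactly where the quoted 0.05-0.10 mag accuracy figure comes from, and the counts must first pass through the measured linearity correction.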

  4. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    Science.gov (United States)

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

We introduce a technique for calibrating camera motion in basketball videos. In particular, our method transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.
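Once the court homography is known, rectifying player positions is a single projective transform. A minimal sketch with a toy homography matrix (a real broadcast homography also encodes the camera's perspective):

```python
import numpy as np

def apply_homography(H, pts):
    """Map image points to court coordinates with a 3x3 homography,
    the transform a court calibration like the one above produces."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize

# Toy homography: pure scale + translation.
H = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, -1.0],
              [0.0, 0.0, 1.0]])
print(apply_homography(H, [[3.0, 4.0]]))
```

Applying this per frame to tracked player positions yields the standard-court trajectories that the stroke-query interface searches over.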

  5. Synchronizing Light Pulses With Video Camera

    Science.gov (United States)

    Kalshoven, James E., Jr.; Tierney, Michael; Dabney, Philip

    1993-01-01

    Interface circuit triggers laser or other external source of light to flash in proper frame and field (at proper time) for video recording and playback in "pause" mode. Also increases speed of electronic shutter (if any) during affected frame to reduce visibility of background illumination relative to that of laser illumination.

  6. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement that takes into account nearly all characteristics of the buildings, the detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of their tasks. The project objective is to develop the principal elements of an algorithm for the recognition of a moving object detected by several cameras. The images obtained by the different cameras will be processed, and parameters of motion identified, to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a separate article. This project assesses the complexity of the camera placement algorithm in order to identify cases of inaccurate algorithm implementation, and formulates supplementary requirements and input data by means of the intersecting sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems in the course of the development of a physical security and monitoring system at the stages of project design, development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises of irregular dimensions. The
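The coverage side of a placement algorithm of this kind is often approached greedily. The zones, candidate positions, and coverage sets below are hypothetical, and the paper's model also weighs sector overlap and recognition-task compatibility, which this sketch omits:

```python
def place_cameras(zones, candidates, max_cams):
    """Greedy coverage: at each step pick the candidate position
    covering the most still-uncovered zones."""
    uncovered = set(zones)
    chosen = []
    while uncovered and len(chosen) < max_cams:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break                      # nothing left that helps
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

zones = ["lobby", "hall", "stairs", "exit"]
candidates = {
    "corner_A": {"lobby", "hall"},
    "corner_B": {"hall", "stairs"},
    "door_C": {"exit"},
}
print(place_cameras(zones, candidates, max_cams=3))
```

Greedy set cover is a standard heuristic here because exact coverage optimization is NP-hard even before adding overlap constraints.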

  7. Fast roadway detection using car cabin video camera

    Science.gov (United States)

    Krokhina, Daria; Blinov, Veniamin; Gladilin, Sergey; Tarhanov, Ivan; Postnikov, Vassili

    2015-12-01

    We describe a fast method for road detection in images from a vehicle cabin camera. Straight sections of roadway are detected using the Fast Hough Transform and dynamic programming. We assume that the location of the horizon line in the image and the road pattern are known. The developed method is fast enough to detect the roadway on each frame of the video stream in real time and may be further accelerated by the use of tracking.
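    The voting idea behind Hough-based line detection can be illustrated with a minimal (non-"fast") transform in plain NumPy; a real roadway detector would run on edge maps from the cabin camera and add the dynamic-programming stage the abstract mentions. The function name `hough_lines` is ours.

    ```python
    import numpy as np

    def hough_lines(edge_img, n_theta=180):
        """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta)."""
        h, w = edge_img.shape
        diag = int(np.ceil(np.hypot(h, w)))
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
        ys, xs = np.nonzero(edge_img)
        for x, y in zip(xs, ys):
            # Shift rho by `diag` so accumulator indices are non-negative.
            rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
            acc[rhos, np.arange(n_theta)] += 1
        return acc, thetas, diag

    # Synthetic edge image containing the diagonal line y = x.
    img = np.zeros((64, 64), dtype=np.uint8)
    np.fill_diagonal(img, 1)
    acc, thetas, diag = hough_lines(img)
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    print(rho_idx - diag, np.degrees(thetas[theta_idx]))
    ```

    The accumulator peak lands at rho = 0, theta = 135°, the normal form of the line y = x.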

  8. Identifying sports videos using replay, text, and camera motion features

    Science.gov (United States)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries, increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.

  9. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Cooke, William

    2016-01-01

    Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BVRI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.
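    The baseline being improved here is the classic zero-point fit: instrumental magnitudes −2.5 log10(flux) of reference stars are matched to their catalog magnitudes, and the offset calibrates the meteor photometry. The sketch below is our own toy version, not the MEO pipeline, which also handles extinction, color terms, and per-camera effects.

    ```python
    import numpy as np

    def fit_zero_point(instr_flux, catalog_mag):
        """Least-squares zero point ZP with catalog_mag = -2.5*log10(flux) + ZP."""
        instr_mag = -2.5 * np.log10(instr_flux)
        return np.mean(catalog_mag - instr_mag)

    # Synthetic reference stars generated with a true zero point of 21.0 mag.
    rng = np.random.default_rng(0)
    true_zp = 21.0
    catalog_mag = rng.uniform(6.0, 10.0, size=50)
    flux = 10 ** (-0.4 * (catalog_mag - true_zp))

    zp = fit_zero_point(flux, catalog_mag)
    meteor_mag = -2.5 * np.log10(2.0e4) + zp  # calibrate a meteor's measured flux
    print(zp, meteor_mag)
    ```

    With noiseless synthetic stars the fit recovers the zero point exactly; real calibration uncertainty comes from the scatter of this fit across the frame and across time.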

  10. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time ranges from about 1 to 2 h for the GoPro® and from 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functionality is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows the microsurgeon's magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  11. Robust distributed multiview video compression for wireless camera networks.

    Science.gov (United States)

    Yeo, Chuohao; Ramchandran, Kannan

    2010-04-01

    We present a novel framework for robustly delivering video data from distributed wireless camera networks that are characterized by packet drops. The main focus in this work is on robustness, which is imminently needed in a wireless setting. We propose two alternative models to capture inter-view correlation among cameras with overlapping views. The view-synthesis-based correlation model requires at least two other camera views and relies on both disparity estimation and view interpolation. The disparity-based correlation model requires only one other camera view and makes use of epipolar geometry. With the proposed models, we show how inter-view correlation can be exploited for robustness through the use of distributed source coding. The proposed approach has low encoding complexity, is robust while satisfying tight latency constraints, and requires no intercamera communication. Our experiments show that on bursty packet erasure channels, the proposed H.263+ based method outperforms baseline methods such as H.263+ with forward error correction and H.263+ with intra refresh by up to 2.5 dB. Empirical results further support the relative insensitivity of our proposed approach to the number of additional available camera views or their placement density.

  12. Observations and analysis of FTU plasmas by video cameras

    Energy Technology Data Exchange (ETDEWEB)

    De Angelis, R. [Associazione Euratom/ENEA sulla fusione, CP 65-00044 Frascati, Rome (Italy); Di Matteo, L., E-mail: lucy.dimatteo@enea.i [ENEA Fellow, Via E. Fermi, Frascati (Italy)

    2010-11-11

    The interaction of the FTU plasma with the vessel walls and with the limiters is responsible for the release of hydrogen and impurities through various physical mechanisms (physical and chemical sputtering, desorption, etc.). In the cold plasma periphery, these particles are weakly ionised and emit mainly in the visible spectral range. A good description of the plasma periphery can therefore be obtained with video cameras. In FTU, small video cameras placed close to the plasma edge give wide-angle images of the plasma at a standard rate of 25 frames/s. Images are stored digitally, allowing their retrieval and analysis. This paper reports some of the most interesting features of the discharges evidenced by the images. As a first example, the accumulation of cold neutral gas in the plasma periphery above a density threshold (a phenomenon known as Marfe) can be seen on the video images as a toroidally symmetric band oscillating poloidally; on the multi-chord spectroscopy or bolometer channels, this appears only as a sudden rise of the signals whose overall behaviour could not be clearly interpreted. A second example is the identification of runaway discharges by the signature of the fast electrons emitting synchrotron radiation in their direction of motion; this appears as a bean-shaped bright spot on one toroidal side, which reverses with the plasma current direction. A relevant and potentially dangerous side effect of plasma discharges is the formation of dust as a consequence of strong plasma-wall interaction events; video images allow monitoring and possibly numerical estimation of the amount of dust produced in these events. Specialised software can automatically search the experimental database to identify relevant events, partly overcoming the difficulties associated with the very large amount of data produced by video techniques.

  13. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    Science.gov (United States)

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.

  14. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.

  15. Social Justice through Literacy: Integrating Digital Video Cameras in Reading Summaries and Responses

    Science.gov (United States)

    Liu, Rong; Unger, John A.; Scullion, Vicki A.

    2014-01-01

    Drawing data from an action-oriented research project for integrating digital video cameras into the reading process in pre-college courses, this study proposes using digital video cameras in reading summaries and responses to promote critical thinking and to teach social justice concepts. The digital video research project is founded on…

  16. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field with 20° in diameter to a square field with 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  17. Scientists Behind the Camera - Increasing Video Documentation in the Field

    Science.gov (United States)

    Thomson, S.; Wolfe, J.

    2013-12-01

    Over the last two years, Skypunch Creative has designed and implemented a number of pilot projects to increase the amount of video captured by scientists in the field. The major barrier to success that we tackled with the pilot projects was the conflict between the time, space, and storage constraints of scientists in the field and the demands of shooting high-quality video. Our pilots involved providing scientists with equipment, varying levels of instruction on shooting in the field, and post-production resources (editing and motion graphics). In each project, the scientific team was provided with cameras (or additional equipment if they owned their own), tripods, and sometimes sound equipment, as well as an external hard drive to return the footage to us. Upon receiving the footage we professionally filmed follow-up interviews and created animations and motion graphics to illustrate their points. We also helped with the distribution of the final product (http://climatescience.tv/2012/05/the-story-of-a-flying-hippo-the-hiaper-pole-to-pole-observation-project/ and http://climatescience.tv/2013/01/bogged-down-in-alaska/). The pilot projects were a success. Most of the scientists returned asking for additional gear and support for future field work. Moving out of the pilot phase, we have produced a 14-page guide for scientists shooting in the field based on lessons learned; it contains key tips and best-practice techniques for shooting high-quality footage in the field. We have also expanded the project and are now testing the use of video cameras that can be synced with sensors so that the footage is useful both scientifically and artistically. Extract from A Scientist's Guide to Shooting Video in the Field

  18. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
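    The merge step of such an HDR pipeline is commonly a Debevec-style weighted average of the exposures in the radiance domain. The sketch below is a software illustration of that idea, not the paper's hardware design; it assumes a linear sensor and uses a triangle weight that down-weights near-dark and near-saturated pixels.

    ```python
    import numpy as np

    def merge_hdr(frames, exposure_times):
        """frames: float arrays in [0, 1]; returns relative scene radiance."""
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)  # triangle weight, peaks at mid-gray
            num += w * img / t                 # radiance estimate from this exposure
            den += w
        return num / np.maximum(den, 1e-6)

    # Toy scene: a true radiance ramp captured at three exposure times,
    # clipped to [0, 1] by the (assumed linear) sensor.
    radiance = np.linspace(0.05, 4.0, 256)
    times = [0.25, 1.0, 4.0]
    frames = [np.clip(radiance * t, 0.0, 1.0) for t in times]
    hdr = merge_hdr(frames, times)
    ```

    Wherever at least one exposure is unsaturated, the merge recovers the radiance ramp; a hardware implementation like the one described performs this per-pixel arithmetic in the FPGA pipeline at frame rate.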

  19. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  20. Video-Camera-Based Position-Measuring System

    Science.gov (United States)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy on the order of millimeters over distances on the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
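    For targets lying on a known plane (e.g., a floor or canister face), the pixel-to-world mapping reduces to a 3×3 homography fitted from four or more surveyed reference points. The sketch below shows that standard DLT math; it is a generic illustration, not the NASA system's actual algorithm, and the function names are ours.

    ```python
    import numpy as np

    def fit_homography(px, world):
        """DLT fit of H such that world ~ H @ [u, v, 1]^T (4+ point pairs)."""
        A = []
        for (u, v), (x, y) in zip(px, world):
            A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
            A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
        # The homography is the null vector of A (last right-singular vector).
        _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
        return vt[-1].reshape(3, 3)

    def pixel_to_world(H, u, v):
        x, y, w = H @ np.array([u, v, 1.0])
        return x / w, y / w

    # Toy example: four surveyed floor points (meters) and their pixel positions.
    world = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
    px = [(100.0, 400.0), (500.0, 420.0), (480.0, 200.0), (120.0, 190.0)]
    H = fit_homography(px, world)
    x, y = pixel_to_world(H, 480.0, 200.0)
    ```

    Full 3-D (x, y, z) recovery as described in the record additionally needs the camera's calibrated pose and either a second view or known target geometry.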

  1. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    Science.gov (United States)

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  2. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  3. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
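    The "non-physical mode" prediction used in the study is classical aliasing: a vibration at frequency f sampled at frame rate fs folds to an apparent frequency below the Nyquist limit fs/2. A minimal sketch (function name ours):

    ```python
    def aliased_frequency(f, fs):
        """Apparent frequency (Hz) of a tone f sampled at frame rate fs."""
        f_mod = f % fs
        return f_mod if f_mod <= fs / 2 else fs - f_mod

    # A 72 Hz vibration filmed at 60 fps appears folded to 12 Hz, while a
    # 25 Hz vibration is below Nyquist and is captured at its true frequency.
    print(aliased_frequency(72.0, 60.0))  # 12.0
    print(aliased_frequency(25.0, 60.0))  # 25.0
    ```

    Predicting these folded frequencies from candidate true modes is what lets the study flag and exclude the spurious peaks its cameras introduce.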

  4. Simultaneous monitoring of a collapsing landslide with video cameras

    Directory of Open Access Journals (Sweden)

    K. Fujisawa

    2008-01-01

    Full Text Available Effective countermeasures and risk management to reduce landslide hazards require a full understanding of the processes of collapsing landslides. While the processes are generally estimated from the features of debris deposits after collapse, simultaneous monitoring during collapse provides more insights into the processes. Such monitoring, however, is usually very difficult, because it is rarely possible to predict when a collapse will occur. This study introduces a rare case in which a collapsing landslide (150 m in width and 135 m in height) was filmed with three video cameras in Higashi-Yokoyama, Gifu Prefecture, Japan. The cameras were set up in the front and on the right and left sides of the slide in May 2006, one month after a series of small slope failures in the toe and the formation of cracks on the head indicated that a collapse was imminent.

    The filmed images showed that the landslide collapse started from rock falls and slope failures occurring mainly around the margin, that is, the head, sides and toe. These rock falls and slope failures, which were individually counted on the screen, increased with time. Analyzing the images, five of the failures were estimated to have each produced more than 1000 m3 of debris, and the landslide collapsed with several surface failures accompanied by a toppling movement. The manner of the collapse suggested that the slip surface initially remained on the upper slope, and then extended down the slope as the excessive internal stress shifted downwards. Image analysis, together with field measurements using a ground-based laser scanner after the collapse, indicated that the landslide produced a total of 50 000 m3 of debris.

    As described above, simultaneous monitoring provides valuable information about landslide processes. Further development of monitoring techniques will help clarify landslide processes qualitatively as well as quantitatively.

  5. Robust Video Stabilization Using Particle Keypoint Update and l₁-Optimized Camera Path.

    Science.gov (United States)

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-02-10

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality for various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
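    The flavor of total-variation-minimizing path smoothing can be sketched in 1-D: smooth a shaky path against a penalty on its derivative, which favors piecewise-steady motion. Below, the absolute-value penalty is approximated by a smooth Charbonnier surrogate and minimized by plain gradient descent; this is our simplification with illustrative parameter values, whereas production stabilizers solve the exact problem with convex programming.

    ```python
    import numpy as np

    def smooth_path(c, lam=5.0, eps=0.5, lr=0.02, iters=3000):
        """Minimize sum((p-c)^2) + lam*sum(sqrt(diff(p)^2 + eps^2)) by descent."""
        p = c.copy()
        for _ in range(iters):
            d = np.diff(p)
            tv_grad = d / np.sqrt(d * d + eps * eps)  # smoothed d|d|/dd
            g = 2.0 * (p - c)
            g[:-1] -= lam * tv_grad  # derivative term d_t pulls on p[t] ...
            g[1:] += lam * tv_grad   # ... and on p[t+1] with opposite sign
            p -= lr * g
        return p

    # Shaky horizontal camera path: a slow pan plus frame-to-frame jitter.
    rng = np.random.default_rng(1)
    t = np.arange(120)
    c = 0.5 * t + rng.normal(0.0, 3.0, size=t.size)
    p = smooth_path(c)
    print(np.abs(np.diff(p)).sum() < np.abs(np.diff(c)).sum())
    ```

    The smoothed path preserves the pan while its temporal total variation drops well below that of the jittery input; a stabilizer then warps each frame by the difference between the two paths.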

  6. Task analysis of laparoscopic camera control schemes.

    Science.gov (United States)

    Ellis, R Darin; Munaco, Anthony J; Reisner, Luke A; Klein, Michael D; Composto, Anthony M; Pandya, Abhilash K; King, Brady W

    2016-12-01

    Minimally invasive surgeries rely on laparoscopic camera views to guide the procedure. Traditionally, an expert surgical assistant operates the camera. In some cases, a robotic system is used to help position the camera, but the surgeon is required to direct all movements of the system. Some prior research has focused on developing automated robotic camera control systems, but that work has been limited to rudimentary control schemes due to a lack of understanding of how the camera should be moved for different surgical tasks. This research used task analysis with a sample of eight expert surgeons to discover and document several salient methods of camera control and their related task contexts. Desired camera placements and behaviours were established for two common surgical subtasks (suturing and knot tying). The results can be used to develop better robotic control algorithms that will be more responsive to surgeons' needs. Copyright © 2015 John Wiley & Sons, Ltd.

  7. MAGIC-II Camera Slow Control Software

    CERN Document Server

    Steinke, B; Tridon, D Borla

    2009-01-01

    The Imaging Atmospheric Cherenkov Telescope MAGIC I has recently been extended to a stereoscopic system by adding a second 17 m telescope, MAGIC-II. One of the major improvements of the second telescope is an improved camera. The Camera Control Program is embedded in the telescope control software as an independent subsystem. The Camera Control Program is an effective software to monitor and control the camera values and their settings and is written in the visual programming language LabVIEW. The two main parts, the Central Variables File, which stores all information of the pixel and other camera parameters, and the Comm Control Routine, which controls changes in possible settings, provide a reliable operation. A safety routine protects the camera from misuse by accidental commands, from bad weather conditions and from hardware errors by automatic reactions.

  8. Acute gastroenteritis and video camera surveillance: a cruise ship case report.

    Science.gov (United States)

    Diskin, Arthur L; Caro, Gina M; Dahl, Eilif

    2014-01-01

    A 'faecal accident' was discovered in front of a passenger cabin of a cruise ship. After proper cleaning of the area the passenger was approached, but denied having any gastrointestinal symptoms. However, when confronted with surveillance camera evidence, she admitted having the accident and even bringing the towel stained with diarrhoea back to the pool towels bin. She was isolated until the next port where she was disembarked. Acute gastroenteritis (AGE) caused by Norovirus is very contagious and easily transmitted from person to person on cruise ships. The main purpose of isolation is to avoid public vomiting and faecal accidents. To quickly identify and isolate contagious passengers and crew and ensure their compliance are key elements in outbreak prevention and control, but this is difficult if ill persons deny symptoms. All passenger ships visiting US ports now have surveillance video cameras, which under certain circumstances can assist in finding potential index cases for AGE outbreaks.

  9. Development of a 3D Flash LADAR Video Camera for Entry, Descent, and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera which produces 3-D point clouds at 30 Hz. Flash LADAR captures...

  10. Development of a 3D Flash LADAR Video Camera for Entry, Descent and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 x 128 frame, 3D Flash LADAR video camera capable of a 30 Hz frame rate. Because Flash LADAR captures an...

  11. Using a Video Camera to Measure the Radius of the Earth

    Science.gov (United States)

    Carroll, Joshua; Hughes, Stephen

    2013-01-01

    A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
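    The underlying geometry: at sunset the shadow's edge climbs the building as the Earth rotates, so if the shadow rises a height h in time t, the Earth has turned through α = 2πt/T and cos α = R/(R + h), giving R = h cos α/(1 − cos α). A toy calculation follows (equator-at-equinox geometry assumed; the published method applies latitude and solar-declination corrections, and the input values here are illustrative, not the paper's data).

    ```python
    import math

    def earth_radius(h_m, t_s, T_s=86164.0):
        """Earth radius (m) from a shadow rising h_m meters in t_s seconds.

        T_s is the sidereal day; alpha is the rotation angle during t_s.
        """
        alpha = 2.0 * math.pi * t_s / T_s
        return h_m * math.cos(alpha) / (1.0 - math.cos(alpha))

    # Example: the shadow edge rises 50 m up a building in about 54 s.
    R = earth_radius(50.0, 54.0)
    print(R / 1000.0)  # radius in km
    ```

    For small α this reduces to R ≈ 2h/α², which makes clear why timing accuracy dominates the error budget: R scales with 1/t².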

  12. Correction of spatially varying image and video motion blur using a hybrid camera.

    Science.gov (United States)

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.

  13. Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.

    Science.gov (United States)

    Foss, Kurt; Kahan, Robert S.

    This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…

  14. Prompting Spontaneity by Means of the Video Camera in the Beginning Foreign Language Class.

    Science.gov (United States)

    Pelletier, Raymond J.

    1990-01-01

    Describes four techniques for using a video camera to generate higher levels of student interest, involvement, and productivity in beginning foreign language courses. The techniques include spontaneous discussion of video images, enhancement of students' use of interrogative pronouns and phrases, grammar instruction, and student-produced skits.…

  15. Digital video technology and production 101: lights, camera, action.

    Science.gov (United States)

    Elliot, Diane L; Goldberg, Linn; Goldberg, Michael J

    2014-01-01

    Videos are powerful tools for enhancing the reach and effectiveness of health promotion programs. They can be used for program promotion and recruitment, for training program implementation staff/volunteers, and as elements of an intervention. Although certain brief videos may be produced without technical assistance, others often require collaboration and contracting with professional videographers. To get practitioners started and to facilitate interactions with professional videographers, this Tool includes a guide to the jargon of video production and suggestions for how to integrate videos into health education and promotion work. For each type of video, production principles and issues to consider when working with a professional videographer are provided. The Tool also includes links to examples in each category of video applications to health promotion.

  16. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  17. video114_0402c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  18. video114_0402b -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  19. video115_0403 -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  20. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
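
    As an illustration of the fuzzy control idea described above (my own hypothetical sketch, not the NASA implementation), a pan controller can fuzzify the target's horizontal offset from the image centre, apply three rules ("left" → pan left, "centred" → hold, "right" → pan right), and defuzzify by weighted average:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function that rises from a, peaks at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def pan_command(offset):
        """Map a target's horizontal offset from the image centre
        (normalised to [-1, 1]) to a pan rate via three fuzzy rules.
        Defuzzification is a membership-weighted average of the rule outputs.
        """
        memberships = {
            "left":    tri(offset, -2.0, -1.0, 0.0),
            "centred": tri(offset, -1.0,  0.0, 1.0),
            "right":   tri(offset,  0.0,  1.0, 2.0),
        }
        outputs = {"left": -1.0, "centred": 0.0, "right": 1.0}  # pan rates
        num = sum(memberships[r] * outputs[r] for r in memberships)
        den = sum(memberships.values())
        return num / den if den else 0.0
    ```

    A target sitting at the centre yields a zero pan rate, while a target halfway to the right edge yields a half-speed pan; the same scheme extends to tilt with a vertical offset.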

  1. A Video Camera Road Sign System of the Early Warning from Collision with the Wild Animals

    Directory of Open Access Journals (Sweden)

    Matuska Slavomir

    2016-05-01

    Full Text Available This paper proposes a camera-based early-warning road sign system that can help prevent vehicle collisions with wild animals. The system consists of camera modules placed along a chosen route and intelligent road signs. Each camera module comprises a camera device and a computing unit. The computing unit captures the video stream from the camera and runs object detection algorithms; machine learning algorithms are then used to classify the moving objects. If a moving object is classified as an animal that could endanger vehicles, a warning is displayed on the intelligent road signs.

  2. Planetary camera control improves microfiche production

    Science.gov (United States)

    Chesterton, W. L.; Lewis, E. B.

    1965-01-01

    Microfiche is prepared using an automatic control system for a planetary camera. The system provides blank end-of-row exposures and signals card completion so that the legend of the next card may be photographed.

  3. I'm camera shy; should my practice install video surveillance cameras?

    National Research Council Canada - National Science Library

    2010-01-01

    ... the use of cameras is generally sufficient to meet legal requirements." For veterinary hospitals, concerns usually center on surveillance cameras pointed at team members, Dr. Allen says. There are two issues. First, employees may be upset by this new symbol of mistrust in their warm workplace home. To offset this reaction, it's a good idea to ease into the...

  4. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    Science.gov (United States)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost ~$11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-$500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the

  5. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  6. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  7. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  8. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors such as acoustics, seismic, vibration, temperature, humidity. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network, suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network and present lessons learned in the building and daily usage of the network.

  9. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  10. Online coupled camera pose estimation and dense reconstruction from video

    Science.gov (United States)

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.

  11. Towards Adaptive Virtual Camera Control In Computer Games

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platf...

  12. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
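
    The spinning-wheel illusion the abstract refers to can be worked through numerically. The sketch below (an illustration of the sampling effect, not code from the article) folds a wheel's true rotation rate into the band a camera's frame rate can represent; a negative result is the familiar backwards-spinning appearance:

    ```python
    def apparent_rotation_hz(true_hz, frame_rate_hz):
        """Rotation rate a video camera appears to record for a wheel
        spinning at `true_hz`, sampled at `frame_rate_hz` frames per second.

        The measured rate is aliased (folded) into the range
        [-frame_rate/2, +frame_rate/2]; a negative value means the wheel
        appears to spin backwards.
        """
        folded = true_hz % frame_rate_hz
        if folded > frame_rate_hz / 2:
            folded -= frame_rate_hz
        return folded
    ```

    For example, a wheel turning at 29 Hz filmed at 30 frames per second appears to rotate at -1 Hz, i.e. slowly backwards, while a 60 Hz wheel appears stationary.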

  13. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    Science.gov (United States)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al. [1] In their work, they analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally, conclusions on which fusion strategies should be preferred under such conditions are given.

  14. Disparity Map Generation Based on Trapezoidal Camera Architecture for Multi-View Video

    Directory of Open Access Journals (Sweden)

    Abdulkadir Iyyaka Audu

    2014-12-01

    Full Text Available Visual content acquisition is a strategic functional block of any visual system. Despite its wide possibilities, the arrangement of cameras for the acquisition of good quality visual content for use in multi-view video remains a huge challenge. This paper presents the mathematical description of trapezoidal camera architecture and the relationships which facilitate the determination of camera positions for visual content acquisition in multi-view video and for depth map generation. The strong point of the trapezoidal camera architecture is that it allows for an adaptive camera topology by which points within the scene, especially occluded ones, can be optically and geometrically viewed from several different viewpoints, either on the edge of the trapezoid or inside it. The concept of a maximum independent set, the characteristics of a trapezoid, and the fact that camera positions (with the exception of a few) differ in their vertical coordinate can be used to address occlusion, which continues to be a major problem in computer vision with regard to the generation of depth maps.

  15. User interface design for iOS camera application : project: designing gif video camera application

    OpenAIRE

    Kim, Erika

    2016-01-01

    The objective of this thesis was to examine the fundamentals and basic principles of great user experience and user interface design. The focus was on laying out theoretical foundations and applying them into a practical end result – a camera application with a simple and user friendly graphical interface. In order to set up the foundations of the thesis, a comprehensive research into the many different factors that, as a whole, make up a good user experience was conducted. Throughout section...

  16. Digital Camera Control for Faster Inspection

    Science.gov (United States)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  17. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
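
    Block matching of the kind the abstract mentions can be sketched as an exhaustive sum-of-absolute-differences (SAD) search over a small displacement window (a generic illustration, not the authors' implementation). Frames are represented here as plain 2D lists of pixel intensities:

    ```python
    def sad(block_a, block_b):
        """Sum of absolute differences between two equally sized blocks."""
        return sum(abs(a - b)
                   for row_a, row_b in zip(block_a, block_b)
                   for a, b in zip(row_a, row_b))

    def get_block(frame, top, left, size):
        """Extract a size x size block whose top-left corner is (top, left)."""
        return [row[left:left + size] for row in frame[top:top + size]]

    def best_match(ref_frame, block, top, left, search=2):
        """Find the displacement (dy, dx), within +/- `search` pixels of
        (top, left), whose block in `ref_frame` minimises the SAD to `block`.
        """
        size = len(block)
        h, w = len(ref_frame), len(ref_frame[0])
        best, best_cost = (0, 0), float("inf")
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if 0 <= y <= h - size and 0 <= x <= w - size:
                    cost = sad(get_block(ref_frame, y, x, size), block)
                    if cost < best_cost:
                        best_cost, best = cost, (dy, dx)
        return best
    ```

    For a bright 2x2 patch that has shifted by one pixel between frames, the search returns the displacement (-1, -1); real codecs and CS recovery pipelines use the same principle with larger blocks and faster search patterns.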

  18. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  19. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom), and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions, including indoor and outdoor environments, varying illumination and the presence of in-scene motion, on varying computational platforms.

  20. Design of Video Interface Conversion System from SDI to Camera Link Based on FPGA

    Institute of Scientific and Technical Information of China (English)

    朱超; 刘艳滢; 董月芳

    2011-01-01

    Aimed at cameras with SDI interface output, a video interface conversion system that transforms SDI input into Camera Link output is designed and implemented, using Xilinx Corporation's Spartan-3E XC3S250E as the main control chip. The cable equalization, data retiming, and video decoding circuits for the SDI signal, as well as the FPGA modules for data stream de-interleaving, storage, color space conversion, and Camera Link timing generation, are introduced in detail. In practical applications, the SDI video signal output from a camera can be converted by this system and fed into a frame grabber with a Camera Link interface, making video display and processing more convenient.

  1. A novel method to reduce time investment when processing videos from camera trap studies.

    Directory of Open Access Journals (Sweden)

    Kristijn R R Swinnen

    Full Text Available Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead were empty recordings or recordings of other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch them, in order to reduce workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values) from frame to frame in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and
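
    The pixel-variation filter the abstract describes can be illustrated with a minimal sketch (my own illustration, not code from the study): score each recording by the mean absolute frame-to-frame pixel change and flag only recordings whose score clears a threshold. Frames are represented here as flat lists of grayscale intensities (0-255); the threshold would have to be tuned per site and camera.

    ```python
    def frame_motion_score(frames):
        """Mean absolute per-pixel change between consecutive frames.

        `frames` is a list of equally sized grayscale frames, each a flat
        list of pixel intensities. A higher score means more in-frame movement.
        """
        if len(frames) < 2:
            return 0.0
        total, count = 0, 0
        for prev, cur in zip(frames, frames[1:]):
            total += sum(abs(a - b) for a, b in zip(prev, cur))
            count += len(cur)
        return total / count

    def keep_recording(frames, threshold):
        """Keep a recording for manual review when motion exceeds `threshold`;
        recordings below it are candidates for automatic discarding."""
        return frame_motion_score(frames) >= threshold
    ```

    A recording of a static scene scores near zero and is discarded, while a large animal crossing the frame drives the score well above any sensible threshold; this mirrors the paper's assumption that target-species recordings contain more movement.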

  2. A digital underwater video camera system for aquatic research in regulated rivers

    Science.gov (United States)

    Martin, Benjamin M.; Irwin, Elise R.

    2010-01-01

    We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.

  3. A passive terahertz video camera based on lumped element kinetic inductance detectors.

    Science.gov (United States)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)--designed originally for far-infrared astronomy--as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  4. A passive THz video camera based on lumped element kinetic inductance detectors

    CERN Document Server

    Rowe, Sam; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2015-01-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs) -- designed originally for far-infrared astronomy -- as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  5. Performance evaluation of a two detector camera for real-time video.

    Science.gov (United States)

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. Obtained video framerates were doubled compared to state-of-the-art systems, resulting in a framerate from 22 Hz for a 32×32 resolution to 0.75 Hz for a 128×128 resolution image. Additionally, the two detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.
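    The two-detector idea described above can be sketched in a few lines: with a binary pattern shown on a spatial light modulator, one detector collects the "on" pixels and a second collects the complementary "off" pixels, so their difference delivers one full ±1 projection per pattern. The following is an illustrative, noise-free simulation using Hadamard patterns; it is a sketch of the measurement principle, not the authors' implementation, and all names and sizes are hypothetical.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_two_detectors(scene):
    """Simulate differential single-pixel imaging with two bucket detectors.

    A modulator displays a binary pattern; detector A sees the light from the
    'on' pixels, detector B the complementary 'off' pixels.  Subtracting the
    two measurements yields one +/-1 Hadamard projection per displayed
    pattern, so N patterns suffice instead of the 2N a single detector needs.
    """
    x = scene.ravel().astype(float)
    N = x.size
    H = hadamard(N)
    P_on = (H + 1) / 2           # binary pattern shown on the modulator
    P_off = (1 - H) / 2          # its complement
    y_a = P_on @ x               # detector A measurements
    y_b = P_off @ x              # detector B measurements
    y = y_a - y_b                # equals H @ x
    x_rec = (H.T @ y) / N        # Hadamard matrix is orthogonal: H H^T = N I
    return x_rec.reshape(scene.shape)

scene = np.arange(16, dtype=float).reshape(4, 4)
recovered = single_pixel_two_detectors(scene)
print(np.allclose(recovered, scene))   # exact recovery in the noise-free case
```

In the noise-free case the reconstruction is exact; the paper's contribution concerns how detector SNR degrades this recovery and how the doubled measurement rate trades off against it.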

  6. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F., Westinghouse Hanford

    1996-07-16

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate.

  7. Human Daily Activities Indexing in Videos from Wearable Cameras for Monitoring of Patients with Dementia Diseases

    CERN Document Server

    Karaman, Svebor; Mégret, Rémi; Dovgalecs, Vladislavs; Dartigues, Jean-François; Gaëstel, Yann

    2010-01-01

    Our research focuses on analysing human activities according to a known behaviorist scenario, in the case of noisy, high-dimensional collected data. The data come from the monitoring of patients with dementia diseases by wearable cameras. We define a structural model of video recordings based on a Hidden Markov Model. New spatio-temporal features, color features and localization features are proposed as observations. First results in recognition of activities are promising.

  8. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  9. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  10. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System.

    Science.gov (United States)

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-06-25

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.
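    One common way to estimate object depth from a single calibrated RGB camera, in the spirit of the record above, is to intersect each object's foot point with the ground plane: for a camera at known height with a horizontal optical axis, a ground point at distance Z projects to image row v = v_horizon + f·H/Z. The sketch below uses that simplified model (not the paper's exact calibration procedure; the focal length, camera height and horizon row are hypothetical values) to order overlapping detections by depth.

```python
def ground_depth(v_foot, focal_px, cam_height, v_horizon):
    """Depth of a point on the ground plane from its image row (pinhole model,
    optical axis horizontal).  A ground point at distance Z in front of a
    camera mounted at height H projects to row v = v_horizon + f*H/Z, so
    Z = f*H / (v - v_horizon).  Only rows below the horizon are valid."""
    if v_foot <= v_horizon:
        raise ValueError("foot point must lie below the horizon line")
    return focal_px * cam_height / (v_foot - v_horizon)

def boxes_overlap(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) bounding boxes."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def occluded_pairs(boxes, focal_px=800.0, cam_height=3.0, v_horizon=240.0):
    """Return (occluder, occluded) index pairs: when two boxes overlap in the
    image, the object whose foot point is nearer the camera occludes the other."""
    depths = [ground_depth(b[3], focal_px, cam_height, v_horizon) for b in boxes]
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes_overlap(boxes[i], boxes[j]):
                near, far = (i, j) if depths[i] < depths[j] else (j, i)
                pairs.append((near, far))
    return pairs

# person 0 stands closer (foot row 460) and overlaps person 1 (foot row 320)
boxes = [(100, 260, 180, 460), (120, 250, 190, 320)]
print(occluded_pairs(boxes))   # [(0, 1)]: the nearer person occludes the farther
```

A tracker can then suspend appearance-model updates for the boxes flagged as occluded, which is the use the abstract has in mind.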

  11. CRED Fish Observations from Stereo Video Cameras on a SeaBED AUV collected around Tutuila, American Samoa in 2012

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Black and white imagery were collected using a stereo pair of underwater video cameras mounted on a SeaBED autonomous underwater vehicle (AUV) and deployed around...

  12. Design of IP Camera Access Control Protocol by Utilizing Hierarchical Group Key

    Directory of Open Access Journals (Sweden)

    Jungho Kang

    2015-08-01

    Unlike the familiar CCTV security video surveillance devices, IP cameras, which are connected to a network either with or without wires, provide monitoring services through a built-in web server. Because IP cameras can use a network such as the Internet, multiple IP cameras can be installed over long distances and each camera can utilize the functions of a web server individually. Despite these advantages, IP cameras suffer from difficult access control management and weak user certification. In particular, because the IP camera market is still young, systems designed from a security perspective have not yet been built up. IP cameras also contain severe weaknesses in terms of access authority to the IP camera web server, certification of users, and certification of IP cameras newly installed within a network. This research groups IP cameras hierarchically to manage them systematically, and provides access control and data confidentiality between groups by utilizing group keys. In addition, IP cameras and users are certified using PKI-based certification, and weak points of security such as confidentiality and integrity are improved by encrypting passwords. The research presents specific protocols for the entire process and proves through experiments that the method can actually be applied.
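    The hierarchical group-key idea can be illustrated with a one-way key derivation chain: a node holding a group key can derive the keys of all its descendant groups, but never those of its ancestors. The sketch below uses HMAC-SHA256 as the derivation function; the hierarchy labels and the master secret are hypothetical, and the paper's actual protocol (PKI certification, key distribution messages) is not reproduced here.

```python
import hashlib
import hmac

def derive_key(parent_key: bytes, label: str) -> bytes:
    """Derive a child group key from a parent key with HMAC-SHA256.
    The derivation is one-way: knowing a child key reveals nothing about
    the parent, so compromise does not propagate up the hierarchy."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

# hypothetical hierarchy: root -> building -> floor-level camera groups
root = hashlib.sha256(b"master-secret").digest()      # placeholder secret
building = derive_key(root, "building-A")
floor2 = derive_key(building, "floor-2")
floor3 = derive_key(building, "floor-3")

# the building-level manager can re-derive every floor key on demand...
print(derive_key(building, "floor-2") == floor2)       # True
# ...while sibling groups end up with independent keys
print(floor2 != floor3)                                # True
```

Data encrypted under a floor key is then readable by that floor's cameras and by any ancestor group, which is exactly the confidentiality-between-groups property the abstract describes.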

  13. Analysis of the technical biases of meteor video cameras used in the CILBO system

    Science.gov (United States)

    Albin, Thomas; Koschny, Detlef; Molau, Sirko; Srama, Ralf; Poppe, Björn

    2017-02-01

    In this paper, we analyse the technical biases of two intensified video cameras, ICC7 and ICC9, of the double-station meteor camera system CILBO (Canary Island Long-Baseline Observatory). This is done to thoroughly understand the effects of the camera systems on the scientific data analysis. We expect a number of errors or biases that come from the system: instrumental errors, algorithmic errors and statistical errors. We analyse different observational properties, in particular the detected meteor magnitudes, apparent velocities, estimated goodness-of-fit of the astrometric measurements with respect to a great circle and the distortion of the camera. We find that, due to a loss of sensitivity towards the edges, each camera detects only about 55 % of the meteors it could detect if its sensitivity were constant across the field of view. This detection efficiency is a function of the apparent meteor velocity. We analyse the optical distortion of the system and the goodness-of-fit of individual meteor position measurements relative to a fitted great circle. The astrometric error is dominated by uncertainties in the measurement of the meteor attributed to blooming, distortion of the meteor image and the development of a wake for some meteors. The distortion of the video images can be neglected. We compare the results of the two identical camera systems and find systematic differences: for example, the peak magnitude distribution for ICC9 is shifted by about 0.2-0.4 mag towards fainter magnitudes. This can be explained by the different pointing directions of the cameras. Both cameras monitor the same volume in the atmosphere roughly between the islands of Tenerife and La Palma, but one camera (ICC7) points towards the west and the other (ICC9) towards the east. In the morning hours in particular, the apex source is close to the field of view of ICC9, so these meteors appear slower, increasing the dwell time on a pixel, which is favourable for the detection of a meteor of a given magnitude.

  14. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user...... model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...... machine learning to build predictive models of the virtual camera behaviour. The performance of the models on unseen data reveals accuracies above 70% for all the player behaviour types identified. The characteristics of the generated models, their limits and their use for creating adaptive automatic...

  15. Studying complex decision making in natural settings: using a head-mounted video camera to study competitive orienteering.

    Science.gov (United States)

    Omodei, M M; McLennan, J

    1994-12-01

    Head-mounted video recording is described as a potentially powerful method for studying decision making in natural settings. Most alternative data-collection procedures are intrusive and disruptive of the decision-making processes involved while conventional video-recording procedures are either impractical or impossible. As a severe test of the robustness of the methodology we studied the decision making of 6 experienced orienteers who carried a head-mounted light-weight video camera as they navigated, running as fast as possible, around a set of control points in a forest. Use of the Wilcoxon matched-pairs signed-ranks test indicated that compared with free recall, video-assisted recall evoked (a) significantly greater experiential immersion in the recall, (b) significantly more specific recollections of navigation-related thoughts and feelings, (c) significantly more realizations of map and terrain features and aspects of running speed which were not noticed at the time of actual competition, and (d) significantly greater insight into specific navigational errors and the intrusion of distracting thoughts into the decision-making process. Potential applications of the technique in (a) the environments of emergency services, (b) therapeutic contexts, (c) education and training, and (d) sports psychology are discussed.

  16. Performance Test of the First Prototype of 2 Ways Video Camera for the Muon Barrel Position Monitor

    CERN Document Server

    Brunel, Laurent; Bondar, Tamas; Bencze, Gyorgy; Raics, Peter; Szabó, Jozsef

    1998-01-01

    The CMS Barrel Position Monitor is based on 360 video cameras mounted on 36 very stable mechanical structures. One type of camera is used to observe optical sources mounted on the muon chambers. A first prototype was produced to test the main performances. This report gives the experimental results about stability, linearity and temperature effects.

  17. An explanation for camera perspective bias in voluntariness judgment for video-recorded confession: Suggestion of cognitive frame.

    Science.gov (United States)

    Park, Kwangbai; Pyo, Jimin

    2012-06-01

    Three experiments were conducted to test the hypothesis that difference in voluntariness judgment for a custodial confession filmed in different camera focuses ("camera perspective bias") could occur because a particular camera focus conveys a suggestion of a particular cognitive frame. In Experiment 1, 146 juror eligible adults in Korea showed a camera perspective bias in voluntariness judgment with a simulated confession filmed with two cameras of different focuses, one on the suspect and the other on the detective. In Experiment 2, the same bias in voluntariness judgment emerged without cameras when the participants were cognitively framed, prior to listening to the audio track of the videos used in Experiment 1, by instructions to make either a voluntariness judgment for a confession or a coerciveness judgment for an interrogation. In Experiment 3, the camera perspective bias in voluntariness judgment disappeared when the participants viewing the video focused on the suspect were initially framed to make coerciveness judgment for the interrogation and the participants viewing the video focused on the detective were initially framed to make voluntariness judgment for the confession. The results in combination indicated that a particular camera focus may convey a suggestion of a particular cognitive frame in which a video-recorded confession/interrogation is initially represented. Some forensic and policy implications were discussed.

  18. Research on high-speed TDICCD remote sensing camera video signal processing

    Institute of Scientific and Technical Information of China (English)

    ZHANG Da; XU Shu-yan; MENG Qing-ju

    2009-01-01

    Video signal processing needs a high signal-to-noise ratio (SNR) in high-speed time delay and integration charge coupled devices (TDICCD). To solve this problem, this article first analyzes the characteristics of the output video signal of a new type of high-speed TDICCD and its operation principle. It then studies the correlated double sampling (CDS) method of reducing noise. Following that, a synthesized processing method is proposed, including correlated double sampling, programmable gain control, line calibration and digital offset control. The XRD98L59, a video signal processor for charge coupled devices (CCDs), is applied to one kind of high-speed TDICCD with eight output ports and achieves good video images. The experimental results indicate that the SNR of the images reaches about 50 dB. Video signal processing for high-speed multi-channel TDICCD is thereby implemented, meeting the required project index.
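    The CDS step mentioned above is simple to state: each pixel's output is sampled twice, once at the reset level and once after charge transfer, and the difference cancels the reset (kTC) noise and offset that both samples share. A minimal numerical sketch (synthetic noise figures, not the TDICCD's actual parameters):

```python
import numpy as np

def correlated_double_sampling(reset_level, signal_level):
    """CDS subtracts each pixel's sampled reset level from its signal level,
    cancelling the correlated reset (kTC) noise and offset shared by both
    samples, while the photo-generated signal survives the difference."""
    return signal_level - reset_level

rng = np.random.default_rng(0)
n = 10000
true_signal = 100.0                        # photo-signal in ADU (illustrative)
ktc_noise = rng.normal(0.0, 20.0, n)       # reset noise, shared by both samples
read_noise = rng.normal(0.0, 1.0, (2, n))  # independent noise on each sample

reset = 500.0 + ktc_noise + read_noise[0]
signal = 500.0 + ktc_noise + true_signal + read_noise[1]

video = correlated_double_sampling(reset, signal)
# the large correlated component cancels; what remains is ~sqrt(2) x read noise
print(abs(video.mean() - true_signal) < 1.0, video.std() < 5.0)
```

In the simulation the 20-ADU correlated component disappears entirely from the differenced output, which is why CDS is the first stage of the synthesized processing chain the article proposes.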

  19. Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera.

    Science.gov (United States)

    Miyamoto, Shimpei

    2016-06-01

    Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon's perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon's perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection.

  20. Method for pan-tilt camera calibration using single control point.

    Science.gov (United States)

    Li, Yunting; Zhang, Jun; Hu, Wenwen; Tian, Jinwen

    2015-01-01

    The pan-tilt (PT) camera is widely used in video surveillance systems due to its rotatable property and low cost. The rough output of a PT camera may not satisfy the demands of practical applications; hence an accurate calibration method for PT cameras is desired. However, high-precision camera calibration methods usually require sufficient control points, which are not guaranteed in some practical cases of a PT camera. In this paper, we present a novel method to calibrate the rotation angles of a PT camera online by using only one control point. This is achieved by assuming that the intrinsic parameters and position of the camera are known in advance. More specifically, we first build a nonlinear PT camera model with respect to the two parameters Pan and Tilt. We then convert the nonlinear model into a linear model in the sine and cosine of Tilt, where each element in the augmented coefficient matrix is a function of the single variable Pan. A closed-form solution for Pan and Tilt can then be derived by solving a quadratic equation in the tangent of Pan. Our method is noniterative and does not need feature matching; thus its time efficiency is better. We evaluate our calibration method on various synthetic and real data. The quantitative results demonstrate that the proposed method outperforms other state-of-the-art methods when the intrinsic parameters and position of the camera are known in advance.
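    To make the single-control-point idea concrete, here is a closed-form sketch in a simplified direction-vector formulation (not the paper's exact linear model in sin/cos of Tilt): with known intrinsics and camera position, the bearing to the control point is known both in world coordinates and, via K⁻¹[u, v, 1]ᵀ, in camera coordinates, and pan/tilt follow from two trigonometric equations with a two-way branch ambiguity resolved by the reprojection residual. All numeric values below are hypothetical.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def solve_pan_tilt(d_world, d_cam):
    """Closed-form pan/tilt from a single control point.

    d_world: unit vector from the (known) camera position to the control
             point, in world coordinates.
    d_cam:   the same direction in camera coordinates, obtained from the
             point's pixel via the known intrinsics as K^-1 [u, v, 1]^T.
    Solves d_cam = Rx(tilt) @ Ry(pan) @ d_world.  The pan equation
    x*cos(pan) + z*sin(pan) = a has two roots; the one with the smaller
    reprojection residual is returned."""
    x, y, z = d_world
    a, b, c = d_cam
    r = np.hypot(x, z)
    alpha = np.arctan2(z, x)
    delta = np.arccos(np.clip(a / r, -1.0, 1.0))
    best = None
    for pan in (alpha + delta, alpha - delta):
        w = -x * np.sin(pan) + z * np.cos(pan)
        tilt = np.arctan2(c, b) - np.arctan2(w, y)
        resid = np.linalg.norm(rot_x(tilt) @ rot_y(pan) @ d_world - d_cam)
        if best is None or resid < best[2]:
            best = (pan, tilt, resid)
    return best[0], best[1]

# round trip with known ground-truth angles
pan_true, tilt_true = 0.4, -0.25
d_world = np.array([0.3, 0.5, 0.812])
d_world /= np.linalg.norm(d_world)
d_cam = rot_x(tilt_true) @ rot_y(pan_true) @ d_world
pan, tilt = solve_pan_tilt(d_world, d_cam)
print(np.allclose([pan, tilt], [pan_true, tilt_true]))   # True
```

Like the paper's method, this is noniterative and needs no feature matching; the paper's formulation additionally handles the full projective model rather than bare direction vectors.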

  1. Research on the use and problems of digital video camera from the perspective of schools primary teacher of Granada province

    Directory of Open Access Journals (Sweden)

    Pablo José García Sempere

    2012-12-01

    The adoption of ICT in society, and specifically in schools, is changing relationships and traditional means of teaching. These new situations require teachers to assume new roles and responsibilities, thereby creating new demands for training. The teaching body concurs that "teachers require both initial and ongoing training in the use of digital video cameras and video editing." This article presents the main results of research that focused on the applications of the digital video camera for teachers of primary education schools in the province of Granada, Spain.

  2. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying shuttlecock in badminton. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the predicted landing position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, some of whom move behind a flying shuttlecock; they act as background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by the method of stereo imaging with two high-speed cameras.
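    The fall-point prediction step can be sketched with a drag-free ballistic model: given the triangulated position and velocity, solve the vertical motion for the ground-impact time and advance the horizontal coordinates. This is only a first-order stand-in; a real shuttlecock decelerates strongly from air drag, so the robot described above would need a drag-aware model. The numbers are illustrative.

```python
import math

def predict_fall_point(pos, vel, g=9.81):
    """Predict where a projectile launched from `pos` with velocity `vel`
    (metres, metres per second, z up) hits the ground plane z = 0.

    Drag-free model: z(t) = z0 + vz*t - g*t^2/2 = 0 is solved for the
    positive root, then x and y are advanced linearly.  Returns the landing
    (x, y) and the time of flight."""
    x0, y0, z0 = pos
    vx, vy, vz = vel
    disc = vz * vz + 2.0 * g * z0
    t = (vz + math.sqrt(disc)) / g        # positive root of the quadratic
    return (x0 + vx * t, y0 + vy * t, t)

# shuttlecock at 3 m height, moving 4 m/s forward, 1 m/s sideways, 2 m/s up
x, y, t = predict_fall_point(pos=(0.0, 0.0, 3.0), vel=(4.0, 1.0, 2.0))
print(round(x, 2), round(y, 2), round(t, 2))   # 4.05 1.01 1.01
```

The robot would re-run this prediction on every stereo measurement, so early estimates are refined as the trajectory unfolds.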

  3. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on the parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services using the Smart City safety network.
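    As a toy illustration of alarm detection from trajectory parameters, the rule below flags loitering: an object that stays within a small radius of some point for longer than a threshold. This plain sliding-window rule is a stand-in for the paper's ontology-based semantic reasoning; the radius, duration and track data are hypothetical.

```python
def loitering_alarm(track, radius=2.0, min_duration=30.0):
    """Flag a trajectory as loitering when the object stays within `radius`
    metres of some anchor point for at least `min_duration` seconds.
    `track` is a list of (time_s, x_m, y_m) samples in time order."""
    for i, (t0, x0, y0) in enumerate(track):
        for t1, x1, y1 in track[i + 1:]:
            if (x1 - x0) ** 2 + (y1 - y0) ** 2 > radius ** 2:
                break                      # left the neighbourhood of this anchor
            if t1 - t0 >= min_duration:
                return True                # lingered long enough: raise alarm
    return False

# (time s, x m, y m): one track lingers near (10, 10) for ~40 s, one passes by
lingering = [(0, 0.0, 0.0), (5, 10.0, 10.0), (20, 10.5, 10.2),
             (45, 10.2, 9.8), (50, 30.0, 10.0)]
passing = [(0, 0.0, 0.0), (5, 10.0, 10.0), (10, 20.0, 10.0)]
print(loitering_alarm(lingering), loitering_alarm(passing))   # True False
```

In the paper's architecture the same trajectory parameters would feed an ontology, which lets the raised alarm carry a human-readable description rather than a bare boolean.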

  4. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  5. People counting and re-identification using fusion of video camera and laser scanner

    Science.gov (United States)

    Ling, Bo; Olivera, Santiago; Wagley, Raj

    2016-05-01

    We present a system for people counting and re-identification that can be used by transit and homeland security agencies. Under an FTA SBIR program, we have developed a preliminary system for transit passenger counting and re-identification using a laser scanner and a video camera. The laser scanner is used to identify the locations of a passenger's head and shoulders in an image, a challenging task in crowded environments. It can also estimate passenger height without prior calibration. Various color models have been applied to form color signatures. Finally, using a statistical fusion and classification scheme, passengers are counted and re-identified.
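    A minimal sketch of the color-signature part of such a pipeline: build a normalised hue histogram per detected passenger and match re-observations by histogram distance. This is a deliberately simplified stand-in for the paper's multi-model statistical fusion (which also folds in the laser-derived height estimate); the hue ranges and noise level are synthetic.

```python
import numpy as np

def color_signature(hues, bins=8):
    """Normalised hue histogram used as a simple appearance signature.
    `hues` is an array of hue values in [0, 1) for one detected passenger."""
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def reidentify(query_sig, gallery_sigs):
    """Match a passenger seen at the exit against entry signatures by minimum
    L1 histogram distance; a full system would fuse several colour models and
    the height estimate before classifying."""
    dists = [np.abs(query_sig - g).sum() for g in gallery_sigs]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
red_coat = rng.uniform(0.00, 0.10, 500)     # hues clustered near red
blue_coat = rng.uniform(0.55, 0.65, 500)    # hues clustered near blue
gallery = [color_signature(red_coat), color_signature(blue_coat)]

# the same blue-coated passenger re-observed under slight hue noise
query = color_signature(np.clip(blue_coat + rng.normal(0, 0.02, 500), 0, 0.999))
print(reidentify(query, gallery))            # 1: matched to the blue entry
```

Counting then reduces to tracking how many distinct signatures enter and exit, with the fusion stage arbitrating ambiguous matches.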

  6. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed...... at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over...... serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided means of recording and time-stamping single TLE video fields...

  7. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring due to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
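    The reaction-diffusion intuition can be shown with a toy 1-D simulation: nodes observing a target inject an "activator" that diffuses to neighbours and decays, so the steady state is a bump of concentration centred on the target, which each node maps to a local coding rate. This is an illustrative sketch only; the coefficients, rate bounds and ring topology are hypothetical, not the paper's model.

```python
import numpy as np

def coding_rates(n_nodes, target, steps=2000, D=0.4, decay=0.15, dt=0.5):
    """Toy 1-D reaction-diffusion sketch of camera coding-rate control.

    Each node holds an activator concentration u.  The node observing the
    target injects activator (reaction term); diffusion spreads it to the
    neighbours and a decay term pulls it back toward zero, so the steady
    state is a bump centred on the target.  Each node then maps its local u
    to a video coding rate: high quality near the target, low bandwidth
    elsewhere, with no central coordinator."""
    u = np.zeros(n_nodes)
    source = np.zeros(n_nodes)
    source[target] = 1.0                            # activator injected at the target
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u  # discrete Laplacian (ring)
        u += dt * (D * lap - decay * u + source)
    r_min, r_max = 64.0, 2048.0                     # kbit/s bounds (illustrative)
    return r_min + (r_max - r_min) * u / u.max()

rates = coding_rates(n_nodes=11, target=5)
print(int(np.argmax(rates)))                        # 5: the node watching the target
print(rates[5] > rates[3] > rates[0])               # rate falls off with distance
```

Because each node only exchanges activator with its neighbours, the rate pattern tracks a moving target without any node knowing the global traffic state, which is the property the paper verifies in simulation and testbed experiments.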

  8. vid116_0501s -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  9. vid116_0501d -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  10. vid116_0501n -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  11. vid116_0501c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  12. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    Science.gov (United States)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate dense materials, such as leather, wool, wood and gyprock, and to transmit over long distances due to low atmospheric absorption makes them ideal for numerous applications, such as body scanning, building inspection and seeing in degraded visual environments. Current drawbacks of millimeter-wave imaging systems are that they use single detectors or linear arrays that require scanning, or that their two-dimensional arrays are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact, lightweight camera, based on a 384 x 288 microbolometer-pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of that work focused on transmission imaging as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera for reflection imaging at millimeter wavelengths. Snapshots of objects taken at video rates show the excellent quality of the images. In addition, a description of the imaging system, which includes the terahertz camera and different millimeter-wave sources, is provided.

  13. Real-time people counting system using a single video camera

    Science.gov (United States)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end, non-calibrated video camera. The two main challenges addressed in this paper are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers: several persons may be considered a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and static-object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
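    The adaptive background model described above can be sketched as a motion-gated running average: pixels classified as background blend toward the new frame, while pixels flagged as foreground leave the model untouched, so gradual illumination change is absorbed without "burning in" moving people. The following synthetic-sequence sketch uses hypothetical blend and threshold values and omits the HSV shadow removal and Kalman tracking stages.

```python
import numpy as np

def update_background(bg, frame, fg_mask, alpha=0.05):
    """Running-average background model: background pixels blend toward the
    new frame; foreground pixels leave the model untouched, so the model
    tracks slow illumination drift without absorbing moving people."""
    blend = bg + alpha * (frame - bg)
    return np.where(fg_mask, bg, blend)

def segment(bg, frame, thresh=25.0):
    """Foreground mask by absolute differencing against the background."""
    return np.abs(frame - bg) > thresh

# synthetic greyscale sequence: a static scene with a bright 'person' passing
rng = np.random.default_rng(0)
scene = rng.uniform(40, 60, (48, 64))
bg = scene.copy()
for step in range(20):
    frame = scene + rng.normal(0, 2, scene.shape)   # sensor noise
    col = 10 + 2 * step
    frame[20:40, col:col + 6] += 120.0              # moving foreground blob
    fg_mask = segment(bg, frame)
    bg = update_background(bg, frame, fg_mask)

print(fg_mask[30, 10 + 2 * 19 + 3])     # True: person detected in the last frame
print(np.abs(bg - scene).max() < 25)    # background never absorbed the person
```

A production system would replace the fixed threshold with the automatic thresholding the paper mentions and feed the resulting blobs to the shadow-removal and tracking stages.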

  14. Interaction Control Protocols for Distributed Multi-user Multi-camera Environments

    Directory of Open Access Journals (Sweden)

    Gareth W Daniel

    2003-10-01

    Full Text Available Video-centred communication (e.g., video conferencing, multimedia online learning, traffic monitoring, and surveillance) is becoming a customary activity in our lives. The management of interactions in such an environment is a complicated HCI issue. In this paper, we present our study on a collection of interaction control protocols for distributed multi-user multi-camera environments. These protocols facilitate different approaches to managing a user's entitlement for controlling a particular camera. We describe a web-based system that allows multiple users to manipulate multiple cameras in varying remote locations. The system was developed using the Java framework, and all protocols discussed have been incorporated into the system. Experiments were designed and conducted to evaluate the effectiveness of these protocols, and to enable the identification of various human factors in a distributed multi-user multi-camera environment. This work provides an insight into the complexity associated with interaction management in video-centred communication. It can also serve as a conceptual and experimental framework for further research in this area.
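The simplest entitlement policy such a system could enforce is first-come, first-served: one user at a time holds control of a camera, and waiting users queue behind the current controller. A minimal sketch of that idea (the class and method names are ours, not taken from the paper):

```python
from collections import deque

class CameraControlQueue:
    """FIFO entitlement: the user at the head of the queue controls the camera."""
    def __init__(self):
        self._queue = deque()

    def request(self, user):
        """Queue the user; return True if control is granted immediately."""
        if user not in self._queue:
            self._queue.append(user)
        return self.controller() == user

    def release(self, user):
        """The current controller gives up control; the next user takes over."""
        if self._queue and self._queue[0] == user:
            self._queue.popleft()

    def controller(self):
        return self._queue[0] if self._queue else None

cam = CameraControlQueue()
cam.request("alice")           # alice gains control immediately
granted = cam.request("bob")   # bob queues behind alice
cam.release("alice")           # control passes to bob
```

Other protocols in the paper's spirit (priority-based or time-sliced entitlement) would replace the deque with a priority queue or add a timeout on the head position.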

  15. Playing Action Video Games Improves Visuomotor Control.

    Science.gov (United States)

    Li, Li; Chen, Rongrong; Chen, Jing

    2016-08-01

    Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or non-action video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a non-action video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving.

  16. Implementation of a High-speed Camera Link Camera Interface Based on Video Port

    Institute of Scientific and Technical Information of China (English)

    丁杨

    2011-01-01

    A TMS320DM642 device is used as the core processor in a small-scale real-time image acquisition system. The design and implementation of a seamless connection between the Camera Link interface of a line-scan camera and the TMS320DM642, through the Video Port and Channel Link chips, are discussed. The main objective of the design is to provide speed matching between front-end acquisition and back-end output in a high-speed image acquisition system with an image data output rate of 40 MB/s, so as to implement real-time, high-speed collection of large image data streams.

  17. Optimizing Detection Rate and Characterization of Subtle Paroxysmal Neonatal Abnormal Facial Movements with Multi-Camera Video-Electroencephalogram Recordings.

    Science.gov (United States)

    Pisani, Francesco; Pavlidis, Elena; Cattani, Luca; Ferrari, Gianluigi; Raheli, Riccardo; Spagnoli, Carlotta

    2016-06-01

    Objectives We retrospectively analyze the diagnostic accuracy for paroxysmal abnormal facial movements, comparing a one-camera versus a multi-camera approach. Background Polygraphic video-electroencephalogram (vEEG) recording is the current gold standard for brain monitoring in high-risk newborns, especially when neonatal seizures are suspected. One camera synchronized with the EEG is commonly used. Methods Since mid-June 2012, we have been using multiple cameras, one of which points toward the newborns' faces. We evaluated vEEGs recorded in newborns in the study period between mid-June 2012 and the end of September 2014 and compared, for each recording, the diagnostic accuracies obtained with the one-camera and multi-camera approaches. Results We recorded 147 vEEGs from 87 newborns and found 73 episodes of paroxysmal abnormal facial movements in 18 vEEGs of 11 newborns with the multi-camera approach. With the single-camera approach, only 28.8% of these events were identified (21/73). Ten vEEGs that were positive with the multi-camera approach, containing 52 paroxysmal abnormal facial movements (52/73, 71.2%), would have been considered negative with the single-camera approach. Conclusions The use of one additional facial camera can significantly increase the diagnostic accuracy of vEEGs in the detection of paroxysmal abnormal facial movements in newborns.

  18. Modernization of B-2 Data, Video, and Control Systems Infrastructure

    Science.gov (United States)

    Cmar, Mark D.; Maloney, Christian T.; Butala, Vishal D.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third largest thermal-vacuum facility with propellant systems capability. B-2 has completed a modernization effort of its facility legacy data, video and control systems infrastructure to accommodate modern integrated testing and Information Technology (IT) Security requirements. Integrated systems tests have been conducted to demonstrate the new data, video and control systems functionality and capability. Discrete analog signal conditioners have been replaced by new programmable, signal processing hardware that is integrated with the data system. This integration supports automated calibration and verification of the analog subsystem. Modern measurement systems analysis (MSA) tools are being developed to help verify system health and measurement integrity. Legacy hard-wired digital data systems have been replaced by distributed Fibre Channel (FC) network connected digitizers where high speed sampling rates have increased to 256,000 samples per second. Several analog video cameras have been replaced by digital image and storage systems. Hard-wired analog control systems have been replaced by Programmable Logic Controllers (PLC), fiber optic networks (FON) infrastructure and human machine interface (HMI) operator screens. New modern IT Security procedures and schemes have been employed to control data access and process control flows. Due to the nature of testing possible at B-2, flexibility and configurability of systems has been central to the architecture during modernization.

  20. IP Camera Based Video Surveillance Using Object’s Boundary Specification

    Directory of Open Access Journals (Sweden)

    Natalia Chaudhry

    2016-08-01

    Full Text Available The ability to detect and track objects of interest in a sequence of frames is a critical and vital problem in many vision systems developed to date. This paper presents a smart surveillance system that tracks objects of interest in a sequence of frames within their own defined respective boundaries. The objects of interest are registered or saved within the system. We have proposed a unique tracking algorithm using a combination of SURF feature matching, Kalman filtering and template matching. Moreover, an efficient technique is proposed to refine a registered object image, extract the object of interest and remove extraneous image area from it. The system tracks registered objects in their respective boundaries using real-time video generated through two IP cameras positioned in front of each other.
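The Kalman-filtering part of such a tracking pipeline can be illustrated compactly. Below is a minimal (non-adaptive) constant-velocity Kalman filter for one image axis, showing the predict/update cycle used to estimate an object's future position; the process and measurement noise values are illustrative only, not taken from the paper:

```python
def kf_predict(x, P, q=1.0, dt=1.0):
    """Constant-velocity predict step: state x = [position, velocity]."""
    x = [x[0] + dt * x[1], x[1]]
    # P = F P F^T + Q, with F = [[1, dt], [0, 1]] and Q = q * I for simplicity.
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    return x, P

def kf_update(x, P, z, r=2.0):
    """Update with a position-only measurement z (H = [1, 0])."""
    y = z - x[0]                          # innovation
    s = P[0][0] + r                       # innovation variance
    k0, k1 = P[0][0] / s, P[1][0] / s     # Kalman gain
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, P

# Track an object moving ~3 px/frame along one image axis.
x, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
for z in [3.0, 6.1, 8.9, 12.0]:
    x, P = kf_predict(x, P)
    x, P = kf_update(x, P, z)
```

During occlusion, the update step is simply skipped and the predicted position is used, which is what gives Kalman trackers their robustness to missing detections. An *adaptive* filter, as in the paper, would additionally tune `q` or `r` online.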

  1. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.

  2. Wireless video monitoring and robot control in security applications

    Science.gov (United States)

    Nurkkala, Eero A.; Pyssysalo, Tino; Roning, Juha

    1998-10-01

    This research focuses on applications based on wireless monitoring and robot control, utilizing motion image and augmented reality. These applications include remote services and surveillance-related functions such as remote monitoring. A remote service can be, for example, a way to deliver products at a hospital or old people's home. Due to the mobile nature of the system, monitoring at places with privacy concerns is possible. On the other hand, mobility demands wireless communications. Suitable present technologies for wireless video transfer are weighed. Identification of objects with the help of Radio Frequency Identification (RFID) technology and facial recognition results in intelligent actions, for example, where the control of a robot does not require an extensive workload from the user. In other words, tasks can be partially autonomous. RFID can also be used in augmentation of the video view with virtual objects. As a real-life experiment, a prototype environment is being constructed that consists of a robot equipped with a video camera and wireless links to the network and a multimedia computer.

  3. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and clear visualization of the surgical anatomy. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its light weight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high-quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  4. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  5. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
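XML-based configuration management of the kind described lends itself to standard parsing tools on the automation side. A hedged sketch using Python's `xml.etree` on a hypothetical configuration fragment (the element and attribute names below are invented for illustration, not the camera's actual schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML configuration fragment in the spirit of the camera's
# XML-based configuration management (all names here are illustrative).
config_xml = """
<streak_camera>
  <sweep full_time_ns="15"/>
  <trigger level_mv="250" lockout="true"/>
  <network dhcp="false" address="192.168.0.42"/>
</streak_camera>
"""

root = ET.fromstring(config_xml)
sweep_ns = float(root.find("sweep").get("full_time_ns"))
trigger_lockout = root.find("trigger").get("lockout") == "true"
```

In a deployment, automation software would fetch and post such a document over HTTP rather than hold it as a string.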

  6. A camera space control system for an automated forklift

    Energy Technology Data Exchange (ETDEWEB)

    Miller, R.K.; Stewart, D.G.; Brockman, W.H. (Iowa State Univ., Ames, IA (United States)); Skaar, S.B. (Univ. of Notre Dame, IN (United States). Dept. of Aerospace and Mechanical Engineering)

    1994-10-01

    The authors present experimental results on a method of camera-space control applied to a mobile cart with an on-board robot, operated as a forklift. The objective is to extend earlier results to the task of precise and robust three-dimensional object placement. The method is illustrated with a box-stacking task. Camera-space control does not rely on producing absolute position measurements: all measurements, estimates and control criteria are expressed relative to camera images, in units of pixels. The resulting "camera space" technique is found to be very robust, i.e., extremely accurate modeling and calibration are not needed in order to achieve a precise result.
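The essence of the camera-space idea, regulating on pixel error alone with no absolute calibration, can be shown with a toy one-dimensional simulation: the controller never knows the plant's pixels-per-actuator-unit gain, yet a proportional law on the image error still converges. All gains and values below are illustrative, not from the paper:

```python
# The "plant" maps an actuator command to a pixel coordinate through an
# unknown gain; the controller only ever sees pixel measurements.
UNKNOWN_GAIN = 7.3   # pixels per actuator unit; the controller never uses this

def observe(actuator_pos):
    """Camera measurement: tool-tip location in pixels (simulated plant)."""
    return UNKNOWN_GAIN * actuator_pos

def camera_space_step(actuator_pos, target_px, k=0.05):
    """Move the actuator proportionally to the *pixel* error only."""
    error_px = target_px - observe(actuator_pos)
    return actuator_pos + k * error_px

target_px = 146.0
pos = 0.0
for _ in range(100):
    pos = camera_space_step(pos, target_px)
final_error_px = abs(target_px - observe(pos))
```

The loop converges whenever the loop gain `k * UNKNOWN_GAIN` lies in (0, 2), which is why rough knowledge of the plant suffices, mirroring the robustness to imprecise calibration reported above.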

  7. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences of the trait changes in space-time over a complete expressional production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a manifold-learning method of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former captures more structural characteristics of the data to be classified in space-time.

  8. Observation of cloud-to-ground lightning channels with high-speed video camera

    CERN Document Server

    Buguet, M; Blanchet, P; Pédeboy, S; Barnéoud, P; Laroche, P

    2014-01-01

    Between May and October 2013 (a period of sustained thunderstorm activity in France), several cloud-to-ground lightning flashes were observed in the Paris area with a high-speed video camera (14000 frames per second). The localization and polarity of the recorded cloud-to-ground flashes were obtained from the French lightning detection network Météorage, which is equipped with the same low-frequency sensors used by the US NLDN. In this paper we focus on 7 events (3 positive and 4 negative cloud-to-ground lightning flashes). The propagation velocity of the leaders and its temporal evolution have been estimated; the evolution of branching of the negative leaders has been observed during the propagation of the channel that gets connected to ground and initiates the first return stroke. One aim of this preliminary study is to emphasize the differences between the characteristics of positive and negative leaders.

  9. Detection of kinematics parameters of index finger movement with high-speed video camera

    Institute of Scientific and Technical Information of China (English)

    Hou Wensheng; Jiang Yingtao; Wu Xiaoying; Zheng Xiaolin; Zheng Jun; Ye Yihong

    2008-01-01

    Synergic movement of the finger joints provides the human hand with tremendous dexterity, and the detection of kinematics parameters is critical to describe and evaluate the kinesiology of the fingers. The present work investigates how the angular velocity and angular acceleration of the joints of the index finger vary with respect to time while conducting a motor task. A high-speed video camera was employed to visually record the movement of the index finger, and miniaturized (5-mm diameter) reflective markers were affixed to the subject's index finger, on the side close to the thumb and the dorsum of the thumb, at different joint landmarks. Captured images were reviewed frame by frame to obtain the coordinate values of each joint, and the angular displacements, angular velocities and angular accelerations were obtained with trigonometric functions. The experimental results show that this method can detect the kinematics parameters of the index finger joints during movement, and can be a valid route to study the motor function of the index finger.
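The angle-from-markers computation described can be sketched with elementary trigonometry: the joint angle follows from `atan2` of the two segment directions meeting at the joint, and angular velocity from finite differences of the angle across frames. A toy illustration (the marker coordinates and frame rate are invented):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by markers a-b-c, via atan2."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) -
        math.atan2(a[1] - b[1], a[0] - b[0]))
    # Wrap into [-180, 180] before taking the magnitude.
    return abs(ang if -180 <= ang <= 180 else ang - math.copysign(360, ang))

def angular_velocity(angles, fps):
    """Central-difference angular velocity (deg/s) from per-frame angles."""
    dt = 1.0 / fps
    return [(angles[i + 1] - angles[i - 1]) / (2 * dt)
            for i in range(1, len(angles) - 1)]

# Three markers around one joint over three frames (toy coordinates, 500 fps):
frames = [((0, 1), (0, 0), (1, 0)),      # 90 degrees
          ((0, 1), (0, 0), (1, 0.2)),    # joint flexing
          ((0, 1), (0, 0), (1, 0.45))]
angles = [joint_angle(*f) for f in frames]
omega = angular_velocity(angles, fps=500)
```

A second central difference over `omega` would give the angular acceleration in the same way.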

  10. Remote control of a streak camera for real time bunch size measurement in LEP

    CERN Document Server

    Burns, A J; De Vries, J C

    1995-01-01

    A double sweep streak camera, built by industry according to CERN specifications, has been used for a number of years to provide real-time three-dimensional measurements of bunches in LEP, by means of a dedicated synchrotron light source. Originally requiring local manipulation in an underground lab close to the LEP tunnel, the camera can now be fully operated via the control system network. Control functions, such as the adjustment of lens and mirror positions, the selection of camera sweep speeds, and the setting of 12 ps resolution trigger timing, are handled by various networked VME systems, as is real-time image processing. Bunch dimension averages are transferred every few seconds via the control system to the LEP measurement database, and a dedicated high-bandwidth video transmission allows the streak camera images and processed results to be viewed in real time (at 25 Hz) in the LEP control room. Feedback control loops for light intensity, trigger timing and image tracking allow the setup to provide us...

  11. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
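As a stand-in for the mixed-integer formulation, and under the simplifying assumption of one camera per object, the camera-to-object assignment that maximizes a total observation reward can be found by exhaustive enumeration for small problems. A hedged sketch (the reward matrix is invented; a real receding-horizon system would use a MILP solver and re-solve at each step):

```python
from itertools import permutations

def best_assignment(reward):
    """Exhaustively assign cameras (rows) to objects (columns) to maximize
    total reward -- a brute-force stand-in for the paper's MILP formulation,
    practical only for small numbers of cameras."""
    n = len(reward)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(reward[cam][obj] for cam, obj in enumerate(perm))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

# reward[cam][obj] might combine slew cost and the Kalman-filter uncertainty
# of each object, so that poorly observed outliers attract attention.
reward = [[4.0, 1.0, 0.5],
          [2.0, 5.0, 1.0],
          [1.5, 2.0, 6.0]]
total, assignment = best_assignment(reward)
```

Coupling the reward to filter uncertainty, as the abstract describes, is what periodically steers cameras toward outliers rather than letting them dwell on easy targets.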

  12. A novel method to obtain accurate length estimates of carnivorous reef fishes from a single video camera

    Directory of Open Access Journals (Sweden)

    Gastón A. Trobbiani

    Full Text Available In recent years, technological advances have enhanced the use of baited underwater video (BUV) to monitor the diversity, abundance, and size composition of fish assemblages. However, attempts to use static single-camera devices to estimate fish length have been limited by high errors, originating from the variable distance between the fishes and the reference scale included in the scene. In this work, we present a novel, simple method to obtain accurate length estimates of carnivorous fishes by using a single downward-facing camera baited video station. The distinctive feature is the inclusion of a mirrored surface at the base of the stand that allows for correcting the apparent or "naive" length of the fish by the distance between the fish and the reference scale. We describe the calibration procedure and compare the performance (accuracy and precision) of this new technique with that of other single static camera methods. Overall, estimates were highly accurate (mean relative error = -0.6%) and precise (mean coefficient of variation = 3.3%), even in the range of those obtained with stereo-video methods.
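The correction the mirror enables can be reduced to pinhole-camera similar triangles: apparent size scales inversely with distance to the camera, so the naive length read against the reference scale is rescaled by the ratio of the fish's distance to the reference distance. A simplified sketch of that geometry (the parameter names and numbers are ours, not the paper's calibration):

```python
def corrected_length(apparent_len_px, px_per_cm_at_ref, ref_dist_cm, fish_dist_cm):
    """Pinhole-camera scaling: apparent size falls off as 1/distance, so the
    naive length read at the reference scale is corrected by the ratio of the
    fish's distance to the reference distance. (A simplification of the
    paper's mirror-based calibration; parameter names are illustrative.)"""
    naive_len_cm = apparent_len_px / px_per_cm_at_ref
    return naive_len_cm * (fish_dist_cm / ref_dist_cm)

# A fish swimming 30 cm above the reference plane (i.e. closer to the
# downward-facing camera at 150 cm) looks bigger than it is; the
# distance ratio shrinks the estimate accordingly.
length_cm = corrected_length(apparent_len_px=250, px_per_cm_at_ref=5.0,
                             ref_dist_cm=150.0, fish_dist_cm=120.0)
```

The mirror's role in the paper is precisely to make `fish_dist_cm` observable from a single view, which is otherwise the missing quantity in single-camera length estimation.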

  13. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    The ROVs (Remotely Operated Vehicles) are used for installation and maintenance of underwater exploration systems in the oil industry. These systems are operated in distant areas, making the use of a camera essential for visualization of the work area. The synchronization required between operating the manipulator and moving the camera is a complex task for the operator. To accomplish this synchronization, this work presents an analysis of the interconnection of the two systems. The concatenation is made by interconnecting the electric signals of the proportional valves of the manipulator's actuators with the signals of the proportional valves of the camera's actuators. With this interconnection, the camera approximately follows the movement of the manipulator, keeping the object of interest within the operator's field of vision. (author)

  14. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications a

  15. HDR {sup 192}Ir source speed measurements using a high speed video camera

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Gabriel P. [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000, Brazil and Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Viana, Rodrigo S. S.; Yoriyaz, Hélio [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000 (Brazil); Podesta, Mark [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Rubo, Rodrigo A.; Sales, Camila P. de [Hospital das Clínicas da Universidade de São Paulo—HC/FMUSP, São Paulo 05508-000 (Brazil); Reniers, Brigitte [Department of Radiation Oncology - MAASTRO, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Research Group NuTeC, CMK, Hasselt University, Agoralaan Gebouw H, Diepenbeek B-3590 (Belgium); Verhaegen, Frank, E-mail: frank.verhaegen@maastro.nl [Department of Radiation Oncology - MAASTRO, GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montréal, Québec H3G 1A4 (Canada)

    2015-01-15

    Purpose: The dose delivered with a HDR {sup 192}Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a {sup 192}Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases, assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which are within 1.4% of commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
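Extracting a speed profile from high-speed video reduces to finite differences of the tracked source position across frames. A toy sketch (the positions below are invented; the actual measurement tracked the 192Ir source position frame by frame at much finer resolution):

```python
def speed_profile(positions_cm, fps):
    """Finite-difference speed (cm/s) between consecutive video frames."""
    dt = 1.0 / fps
    return [(b - a) / dt for a, b in zip(positions_cm, positions_cm[1:])]

# Toy source positions (cm along the catheter) sampled at 1000 frames/s:
# acceleration, a stretch near the reported ~33 cm/s average, then a stop.
positions = [0.0, 0.01, 0.03, 0.063, 0.096, 0.129, 0.129]
speeds = speed_profile(positions, fps=1000)
dwell_frames = sum(1 for s in speeds if s == 0.0)   # frames spent stopped
```

Integrating dose rate over such a profile is what separates the transit component from the dwell component in the analysis above.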

  16. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

Full Text Available This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertainty in the camera orientation. The key feature of the analysis is a strict Lyapunov function that allows concluding asymptotic stability without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic proportional-derivative control algorithm. Experiments on a laboratory prototype show that uncertainty in the camera orientation does not significantly degrade closed-loop performance.

  17. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    Science.gov (United States)

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can reveal remarkable three-dimensional microstructures in micron-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work, so an automatic technique is highly desirable. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to determine the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but acquiring five defocused images takes one minute. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast image acquisition [4]. It is an analog camera, but its output is captured by a PC with an effective image resolution of 1280×1023 pixels. This resolution is lower than the 4096×4096 pixels of the SS-CCD camera; in exchange, the HD video camera captures one image in only 1/30 second, at the cost of a lower S/N. To improve the S/N, 22 captured frames are integrated, so that each sharpness value is determined with a sufficiently low fitting error. To compensate for the lower resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the differently defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. Correcting an image position took one second, for a total correction time of seven seconds, an order of magnitude shorter than with the SS-CCD camera. When the SS-CCD camera was used for final image capture, recording one tilt image took 30 seconds.
We can obtain a tilt
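The peak-finding step of the AFS scheme above can be sketched as follows. A quasi-Gaussian sharpness curve is a parabola in the log domain, so the vertex of a parabola through the three samples around the maximum estimates the best focus. This is a simplified stand-in for the paper's fitting procedure (function name and the uniform-step assumption are ours):

```python
import math

def best_focus(defocus, sharpness):
    """Estimate the best focus from a few (defocus, sharpness) samples.
    A quasi-Gaussian peak is parabolic in log-sharpness, so we take the
    vertex of the parabola through the three points around the maximum
    (uniform defocus steps assumed)."""
    i = max(range(len(sharpness)), key=lambda k: sharpness[k])
    i = min(max(i, 1), len(sharpness) - 2)   # keep a full 3-point neighbourhood
    h = (defocus[i + 1] - defocus[i - 1]) / 2.0
    y0, y1, y2 = (math.log(sharpness[k]) for k in (i - 1, i, i + 1))
    denom = y0 + y2 - 2.0 * y1
    if denom == 0.0:
        return defocus[i]
    return defocus[i] - h * (y2 - y0) / (2.0 * denom)
```

With five defocus samples, as in the abstract, this recovers a sub-step focus estimate even though no sample lands exactly on the peak.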

  18. A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras

    Directory of Open Access Journals (Sweden)

    Tsai Chi-Yi

    2011-01-01

Full Text Available This article addresses the problem of low dynamic range image enhancement for commercial digital cameras. A novel simultaneous dynamic range compression and local contrast enhancement algorithm (SDRCLCE) is presented to resolve this problem in a single-stage procedure. The proposed SDRCLCE algorithm can be combined with many existing intensity transfer functions, which greatly increases the applicability of the proposed method. An adaptive intensity transfer function is also proposed to combine with the SDRCLCE algorithm, providing adjustable control over the level of overall lightness and contrast achieved at the enhanced output. Moreover, the proposed method is amenable to parallel implementation, which allows us to improve the processing speed of the SDRCLCE algorithm. Experimental results show that the proposed method outperforms three state-of-the-art methods in terms of dynamic range compression and local contrast enhancement.
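The general idea of combining range compression with local contrast enhancement in one step can be illustrated with a toy per-pixel mapping: a log-like transfer compresses the global range while the deviation from a local mean is amplified. This is a generic sketch, not the SDRCLCE formula; the parameters `alpha` and `beta` are hypothetical:

```python
import math

def compress_and_enhance(pixel, local_mean, alpha=0.7, beta=1.5):
    """Toy single-stage mapping (illustrative, not the SDRCLCE algorithm):
    a log-like transfer of the local mean compresses global dynamic range,
    and the local detail (pixel minus local mean) is amplified by `beta`.
    Inputs are normalized to [0, 1]; output is clipped to [0, 1]."""
    base = math.log1p(alpha * 255 * local_mean) / math.log1p(alpha * 255)
    detail = beta * (pixel - local_mean)
    return min(1.0, max(0.0, base + detail))
```

The point of the single-stage structure is that the base (compressed) component and the detail (contrast) component are computed together per pixel, which is what makes such mappings easy to parallelize.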

  19. Twente Optical Perfusion Camera: system overview and performance for video rate laser Doppler perfusion imaging

    NARCIS (Netherlands)

    M. Draijer; E. Hondebrink; T. van Leeuwen; W. Steenbergen

    2009-01-01

    We present the Twente Optical Perfusion Camera (TOPCam), a novel laser Doppler Perfusion Imager based on CMOS technology. The tissue under investigation is illuminated and the resulting dynamic speckle pattern is recorded with a high speed CMOS camera. Based on an overall analysis of the signal-to-n

  1. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and the camera purge system were originally sought and procured as part of the initial waste retrieval project, W-151.

  2. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    Science.gov (United States)

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  3. Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments

    Science.gov (United States)

    Schultz, Patrick L.; Quinn, Andrew S.

    2014-01-01

    In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…

  4. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since a scalable video codec provides various scalabilities for adapting the bitstream to channel conditions and terminal types, scalable codecs are well suited to wired and wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation significantly degrades visual perception. It is important to use the target bits efficiently in order to maintain consistent video quality, or a small distortion variation, throughout the whole video sequence. The scheme proposed in this paper controls video quality in applications supporting scalability, whereas conventional schemes control video quality only in H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled via a closed-form formula that utilizes the residual data and quantization error of the base layer. Simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, with the parameter decision algorithm applied to each frame.
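The abstract does not give the closed-form formula, so as a generic illustration of frame-level quality control, here is a minimal feedback rule that nudges the quantization parameter toward a quality target (all names, the PSNR target, and the proportional gain are our assumptions, not the paper's method):

```python
def next_qp(qp, measured_psnr, target_psnr, gain=0.5, qp_min=0, qp_max=51):
    """Toy frame-level rate-quality controller (illustrative only):
    lower the quantization parameter when measured quality falls below
    the target (finer quantization), raise it on overshoot, and clamp
    to the H.264 QP range [0, 51]."""
    qp = qp - gain * (target_psnr - measured_psnr)
    return int(round(min(qp_max, max(qp_min, qp))))
```

A real controller like the paper's predicts the enhancement-layer distortion from base-layer residuals before coding the frame, rather than reacting after the fact as this sketch does.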

  5. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    Science.gov (United States)

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

We present a low-power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing the data readout rate. Compared to previous architectures, this system modulates pixel exposure at the individual photodiode electronically, without external optical components, providing a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps video from coded images sampled at 5 fps. With a 20× reduction in readout rate, our CMOS image sensor consumes only 14 μW to provide 100 fps video.
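The sampling side of pixel-wise coded exposure can be sketched in a few lines: each pixel integrates its own randomly placed window of consecutive sub-frames, so a single low-rate readout encodes high-rate motion (reconstruction then requires a sparse-recovery solver, which is omitted here; the function name and window length are our assumptions):

```python
import random

def coded_exposure(frames, bump_len=4, seed=0):
    """Simulate pixel-wise coded exposure (illustrative). Every pixel
    integrates over its own random window of `bump_len` consecutive
    sub-frames. `frames` is a list of equally sized 2D grids
    (lists of lists); returns the coded image and the per-pixel
    window start times needed for reconstruction."""
    rng = random.Random(seed)
    T, H, W = len(frames), len(frames[0]), len(frames[0][0])
    starts = [[rng.randrange(T - bump_len + 1) for _ in range(W)] for _ in range(H)]
    coded = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            s = starts[y][x]
            coded[y][x] = sum(frames[t][y][x] for t in range(s, s + bump_len))
    return coded, starts
```

With 20 sub-frames per readout, as in the prototype's 100 fps from 5 fps sampling, the coded image plus the known per-pixel exposure code is what the reconstruction algorithm inverts.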

  6. Indoor 3D Video Monitoring Using Multiple Kinect Depth-Cameras

    Directory of Open Access Journals (Sweden)

    M. Martínez-Zarzuela

    2014-02-01

Full Text Available This article describes the design and development of a system for remote indoor 3D monitoring using an undetermined number of Microsoft® Kinect sensors. In the proposed client-server system, the Kinect cameras can be connected to different computers, addressing in this way the hardware limitation of one sensor per USB controller. The reason behind this limitation is the high bandwidth needed by the sensor, which is also an issue for the TCP/IP communications of the distributed system. Since the traffic volume is too high, the 3D data have to be compressed before they can be sent over the network. The solution consists of encoding the Kinect data into RGB images and then using a standard multimedia codec to compress the colour maps. Information from the different sources is collected on a central client computer, where the point clouds are transformed to reconstruct the scene in 3D. An algorithm is proposed to merge the skeletons detected locally by each Kinect, so that the monitoring of people is robust to self- and inter-user occlusions. Final skeletons are labeled, and the trajectories of every joint can be saved for event reconstruction or further analysis.
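The depth-as-RGB trick mentioned above can be illustrated with the simplest possible packing: a 16-bit Kinect depth value split across two 8-bit colour channels. This is our own minimal illustration of the idea, not the article's actual coding scheme, and it only round-trips losslessly if the downstream codec is configured losslessly (a lossy codec would corrupt the packed bytes):

```python
def depth_to_rgb(depth_mm):
    """Pack a 16-bit depth value (e.g. millimetres) into two 8-bit
    colour channels so depth maps can be fed to a standard image codec.
    The blue channel is left unused in this toy packing."""
    return (depth_mm >> 8) & 0xFF, depth_mm & 0xFF, 0

def rgb_to_depth(r, g, b):
    """Invert depth_to_rgb on the receiving side."""
    return (r << 8) | g
```

Schemes intended for lossy codecs typically spread depth over the channels more redundantly so that small compression errors produce small depth errors.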

  7. Real-Time Range Sensing Video Camera for Human/Robot Interfacing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  8. High-speed radiometric imaging with a gated, intensified, digitally controlled camera

    Science.gov (United States)

    Ross, Charles C.; Sturz, Richard A.

    1997-05-01

The development of an advanced instrument for real-time radiometric imaging of high-speed events is described. The Intensified Digitally-Controlled Gated (IDG) camera is a microprocessor-controlled instrument based on an intensified CCD that is specifically designed to provide radiometric optical data. The IDG supports a variety of camera-synchronous and camera-asynchronous imaging tasks in both passive imaging and active laser range-gated applications. It features both automatic and manual modes of operation, digital precision and repeatability, and ease of use. The IDG produces radiometric imagery by digitally controlling the instrument's optical gain and exposure duration, and by encoding and annotating the parameters necessary for radiometric analysis onto the resultant video signal. Additional inputs, such as date, time, GPS, IRIG-B timing, and other data can also be encoded and annotated. The IDG optical sensitivity can be readily calibrated, with calibration data tables stored in the camera's nonvolatile flash memory. The microprocessor then uses this data to provide a linear, calibrated output. The IDG possesses both synchronous and asynchronous imaging modes in order to allow internal or external control of exposure, timing, and direct interface to external equipment such as event triggers and frame grabbers. Support for laser range-gating is implemented by providing precise asynchronous CCD operation and nanosecond resolution of the intensifier photocathode gate duration and timing. Innovative methods used to control the CCD for asynchronous image capture, as well as other sensor and system considerations relevant to high-speed imaging are discussed in this paper.

  9. Lights, camera, action…critique? Submit videos to AGU communications workshop

    Science.gov (United States)

    Viñas, Maria-José

    2011-08-01

    What does it take to create a science video that engages the audience and draws thousands of views on YouTube? Those interested in finding out should submit their research-related videos to AGU's Fall Meeting science film analysis workshop, led by oceanographer turned documentary director Randy Olson. Olson, writer-director of two films (Flock of Dodos: The Evolution-Intelligent Design Circus and Sizzle: A Global Warming Comedy) and author of the book Don't Be Such a Scientist: Talking Substance in an Age of Style, will provide constructive criticism on 10 selected video submissions, followed by moderated discussion with the audience. To submit your science video (5 minutes or shorter), post it on YouTube and send the link to the workshop coordinator, Maria-José Viñas (mjvinas@agu.org), with the following subject line: Video submission for Olson workshop. AGU will be accepting submissions from researchers and media officers of scientific institutions until 6:00 P.M. eastern time on Friday, 4 November. Those whose videos are selected to be screened will be notified by Friday, 18 November. All are welcome to attend the workshop at the Fall Meeting.

  10. Multiple Traffic Control Using Wireless Sensor and Density Measuring Camera

    Directory of Open Access Journals (Sweden)

    Amrita RAI

    2008-07-01

Full Text Available In the present scenario, vehicular travel is increasing all over the world, especially in large urban areas. There is therefore a need to simulate and optimize traffic control to better accommodate this increasing demand. In this paper we study the optimization of traffic light control in a city using wireless sensors and CCTV cameras. We propose a traffic light controller and simulator that allow us to study different traffic density situations in a city and to control the traffic of the entire city through visual monitoring using CCTV. Wireless sensors can easily sense traffic density, because the general architecture of a wireless sensor network is an infrastructure-less communication network.

  11. Spatial and temporal scales of shoreline morphodynamics derived from video camera observations for the island of Sylt, German Wadden Sea

    Science.gov (United States)

    Blossier, Brice; Bryan, Karin R.; Daly, Christopher J.; Winter, Christian

    2017-04-01

    Spatial and temporal scales of beach morphodynamics were assessed for the island of Sylt, German Wadden Sea, based on continuous video camera monitoring data from 2011 to 2014 along a 1.3 km stretch of sandy beach. They served to quantify, at this location, the amount of shoreline variability covered by beach monitoring schemes, depending on the time interval and alongshore resolution of the surveys. Correlation methods, used to quantify the alongshore spatial scales of shoreline undulations, were combined with semi-empirical modelling and spectral analyses of shoreline temporal fluctuations. The data demonstrate that an alongshore resolution of 150 m and a monthly survey time interval capture 70% of the kilometre-scale shoreline variability over the 2011-2014 study period. An alongshore spacing of 10 m and a survey time interval of 5 days would be required to monitor 95% variance of the shoreline temporal fluctuations with steps of 5% changes in variance over space. Although monitoring strategies such as land or airborne surveying are reliable methods of data collection, video camera deployment remains the cheapest technique providing the high spatiotemporal resolution required to monitor subkilometre-scale morphodynamic processes involving, for example, small- to middle-sized beach nourishment.
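The trade-off quantified above, how much shoreline variance a given survey interval captures, can be illustrated with a simple check: subsample a shoreline time series at the survey interval, linearly interpolate between surveys, and measure the fraction of variance the reconstruction explains. This is a generic sketch of that kind of analysis, not the authors' correlation/spectral method:

```python
def variance_fraction(series, step):
    """Fraction of the variance of a regularly sampled shoreline series
    that surveying every `step` samples would capture, assuming linear
    interpolation between surveys (illustrative; can be negative when
    the subsampled reconstruction is worse than the mean)."""
    n = len(series)
    idx = list(range(0, n, step))
    if idx[-1] != n - 1:
        idx.append(n - 1)                     # always keep the last survey
    recon = []
    for a, b in zip(idx, idx[1:]):
        for k in range(a, b):
            w = (k - a) / (b - a)
            recon.append((1 - w) * series[a] + w * series[b])
    recon.append(series[-1])
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    resid = sum((v - r) ** 2 for v, r in zip(series, recon))
    return 1 - resid / var
```

A slow trend is fully captured at any interval, while fluctuations faster than the survey interval are lost, which is why the study's 5-day/10-m requirement for 95% variance is so much stricter than the monthly/150-m scheme that captures 70%.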

  13. Game Cinematography: From Camera Control to Player Emotions

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2016-01-01

Building on the definition of cinematography (Soanes and Stevenson, Oxford dictionary of English. Oxford University Press, Oxford/New York, 2005), game cinematography can be defined as the art of visualizing the content of a computer game. The relationship between game cinematography and its traditional counterpart is extremely tight as, in both cases, the aim of cinematography is to control the viewer's perspective and affect his or her perception of the events represented. However, game events are not necessarily pre-scripted, and player interaction has a major role in the quality of a game experience; therefore, the role of the camera and the challenges connected to it are different in game cinematography, as the virtual camera has to both dynamically react to unexpected events to correctly convey the game story and take into consideration player actions and desires to support her interaction...

  14. Visual surveys can reveal rather different 'pictures' of fish densities: Comparison of trawl and video camera surveys in the Rockall Bank, NE Atlantic Ocean

    Science.gov (United States)

    McIntyre, F. D.; Neat, F.; Collie, N.; Stewart, M.; Fernandes, P. G.

    2015-01-01

Visual surveys allow non-invasive sampling of organisms in the marine environment, which is of particular importance in deep-sea habitats that are vulnerable to damage caused by destructive sampling devices such as bottom trawls. To enable visual surveying at depths greater than 200 m we used a deep towed video camera system to survey large areas around the Rockall Bank in the North East Atlantic. The area of seabed sampled was similar to that sampled by a bottom trawl, enabling samples from the towed video camera system to be compared with trawl sampling to quantitatively assess the numerical density of deep-water fish populations. The two survey methods provided different results for certain fish taxa and comparable results for others. Fish that exhibited a detectable avoidance behaviour to the towed video camera system, such as the Chimaeridae, resulted in mean density estimates that were significantly lower (121 fish/km²) than those determined by trawl sampling (839 fish/km²). On the other hand, skates and rays showed no reaction to the lights in the towed body of the camera system, and mean density estimates of these were an order of magnitude higher (64 fish/km²) than the trawl (5 fish/km²). This is probably because these fish can pass under the footrope of the trawl, due to their flat body shape lying close to the seabed, but are easily detected by the benign towed video camera system. For other species, such as Molva sp., estimates of mean density were comparable between the two survey methods (towed camera, 62 fish/km²; trawl, 73 fish/km²). The towed video camera system presented here can be used as an alternative benign method for providing indices of abundance for species such as ling in areas closed to trawling, or for those fish that are poorly monitored by trawl surveying in any area, such as the skates and rays.

  15. Video-based realtime IMU-camera calibration for robot navigation

    Science.gov (United States)

    Petersen, Arne; Koch, Reinhard

    2012-06-01

This paper introduces a new method for fast calibration of an inertial measurement unit (IMU) rigidly coupled to a camera. That is, the relative rotation and translation between the IMU and the camera are estimated, allowing IMU data to be transferred to the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e., SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in real time, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity to optimize the trajectory online, increasing the quality and minimizing the time needed for calibration. Except for a marker pattern used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time; depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows the system to be calibrated at startup, compensating for deviations in the calibration due to transport and storage. The estimation quality and consistency are evaluated as a function of the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. We analyze how different types of visual markers, i.e., 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on its applicability to robot systems. The algorithm is implemented using a modular software framework, so that it can be adapted to altered conditions easily.

  16. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  17. Adaptive multifoveation for low-complexity video compression with a stationary camera perspective

    Science.gov (United States)

    Sankaran, Sriram; Ansari, Rashid; Khokhar, Ashfaq A.

    2005-03-01

In the human visual system, the spatial resolution of a scene under view decreases with increasing distance from the point of gaze, also called the foveation point. This phenomenon is referred to as foveation and has been exploited in foveated imaging to allocate bits in image and video coding according to the spatially varying perceived resolution. Several digital image processing techniques have been proposed in the past to realize foveated images and video, in most cases assuming a single foveation point per scene. Recently there has been significant interest in dynamic as well as multi-point foveation, but the complexity of identifying foveation points is significantly high in the approaches proposed so far. In this paper, an adaptive multi-point foveation technique for video data based on the concept of regions of interest (ROIs) is proposed and its performance investigated. The points of interest are assumed to be the centroids of moving objects and are determined dynamically by the proposed foveation algorithm. A fast algorithm for implementing region-based multi-foveation processing is proposed. The proposed adaptive multi-foveation fully integrates with the existing video codec standard in both the spatial and DCT domains.
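The core of any multi-point foveation scheme is mapping each pixel to a resolution level based on its distance to the nearest foveation point. A minimal sketch of that mapping (the function name, level count, and radius are hypothetical, and the paper's fast ROI-based algorithm avoids this brute-force per-pixel distance computation):

```python
def blur_level(x, y, foveation_points, levels=4, radius=50.0):
    """Quantized resolution level for pixel (x, y), given the foveation
    points (here, centroids of moving objects). Level 0 = full resolution
    near a point of gaze; higher levels = coarser encoding, so an encoder
    can allocate fewer bits far from every foveation point."""
    d = min(((x - fx) ** 2 + (y - fy) ** 2) ** 0.5
            for fx, fy in foveation_points)
    return min(levels - 1, int(d / radius))
```

In a codec integration, the level would select a quantization step or a low-pass filter per block rather than per pixel.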

  18. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  19. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. I: Nucleation and growth model.

    Science.gov (United States)

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Tachibana, Masatoshi; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

The contraction process of living Vorticella sp. has been investigated by image processing using a high-speed video camera. To express the temporal change in stalk length during contraction, a damped spring model and a nucleation-and-growth model are applied. A double exponential is deduced from the conventional damped spring model, while a stretched exponential is newly proposed from the nucleation-and-growth model. The stretched exponential function is more suitable for curve fitting and suggests a specific contraction mechanism in which the contraction of the stalk begins near the cell body and spreads downwards along the stalk. The index value of the stretched exponential lies in the range from 1 to 2, in accordance with the model in which the contraction proceeds through nucleation and growth in a one-dimensional space.
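The stretched exponential discussed above can be written as L(t) = L0 − ΔL·(1 − exp(−(t/τ)^β)) with the index β between 1 and 2. A minimal sketch of the model and a crude grid search for β (illustrative only; the study uses proper nonlinear curve fitting, and the parameter names are ours):

```python
import math

def stretched_exp(t, L0, dL, tau, beta):
    """Stalk length at time t under the nucleation-and-growth model:
    contracts from L0 toward L0 - dL with stretching index beta."""
    return L0 - dL * (1.0 - math.exp(-((t / tau) ** beta)))

def fit_beta(ts, lengths, L0, dL, tau):
    """Crude grid search for beta in [1, 2], the range predicted by
    one-dimensional nucleation and growth (illustrative fitting only)."""
    grid = [1.0 + 0.01 * k for k in range(101)]
    err = lambda b: sum((stretched_exp(t, L0, dL, tau, b) - y) ** 2
                        for t, y in zip(ts, lengths))
    return min(grid, key=err)
```

β = 1 reproduces a plain exponential; larger β makes the onset of contraction more abrupt, which is the signature the high-speed recordings resolve.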

  20. Control method for video guidance sensor system

    Science.gov (United States)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)

    2005-01-01

    A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
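The mode logic described in this abstract is essentially a small state machine. A sketch of it follows; only the transitions explicitly stated above (reset→standby after integrity checks, diagnostic→reset, reset/diagnostic commands accepted only from standby, acquisition→tracking on target found) are grounded in the text, and the remaining entries in the table are our simplifying assumptions:

```python
# Allowed mode transitions for a video guidance sensor, sketched from the
# abstract. Entries not stated in the text are assumptions for illustration.
ALLOWED = {
    ("reset", "standby"),         # automatic, after integrity checks pass
    ("diagnostic", "reset"),      # automatic, after diagnostics complete
    ("standby", "reset"),         # commanded; accepted only from standby
    ("standby", "diagnostic"),    # commanded; accepted only from standby
    ("standby", "acquisition"),   # assumed command path
    ("acquisition", "tracking"),  # automatic, when an acceptable target is found
    ("acquisition", "standby"),   # assumed return path
    ("tracking", "standby"),      # assumed return path
    ("standby", "spot"),          # assumed command path
    ("spot", "standby"),          # assumed return path
}

def transition(mode, target):
    """Return the new mode, or raise ValueError for a disallowed request."""
    if (mode, target) not in ALLOWED:
        raise ValueError(f"illegal transition {mode} -> {target}")
    return target
```

Encoding the rules as a table makes the "reset and diagnostic commands only from standby" constraint fall out naturally: those pairs simply do not exist for any other source mode.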

  1. Galvanometer control system design of aerial camera motion compensation

    Science.gov (United States)

    Qiao, Mingrui; Cao, Jianzhong; Wang, Huawei; Guo, Yunzeng; Hu, Changchang; Tang, Hong; Niu, Yuefeng

    2015-10-01

Aerial cameras suffer from image motion in flight. Image motion seriously degrades image quality, blurring edges and causing loss of gray scale. In applications demanding high quality and high precision, image motion compensation (IMC) should therefore be adopted. This paper presents the design of a galvanometer control system for IMC. A voice coil motor serves as the actuator; it has a simple structure, fast dynamic response, and high positioning accuracy. Double-loop feedback is used: PI control with Hall-sensor feedback for the current loop, and fuzzy-PID control with optical-encoder feedback for the speed loop. Compared with a conventional PID controller, simulation results show that the control system has a fast response and high control accuracy.
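The cascaded double-loop structure above can be sketched with two textbook PI controllers, where the outer speed loop produces the current setpoint for the inner current loop. This is a simplified illustration: the gains and time steps are placeholders, and the paper's speed loop actually uses a fuzzy-PID variant rather than plain PI:

```python
class PI:
    """Discrete PI controller (textbook form; gains are placeholders)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0

    def step(self, setpoint, measured):
        e = setpoint - measured
        self.acc += e * self.dt          # integral of the error
        return self.kp * e + self.ki * self.acc

# Cascade: the speed loop's output is the current setpoint of the inner loop.
speed_loop = PI(kp=2.0, ki=0.5, dt=1e-3)     # fed by the optical encoder
current_loop = PI(kp=8.0, ki=1.0, dt=1e-4)   # fed by the Hall sensor

i_ref = speed_loop.step(setpoint=1.0, measured=0.0)      # desired coil current
v_cmd = current_loop.step(setpoint=i_ref, measured=0.0)  # voltage command
```

Running the inner loop an order of magnitude faster than the outer loop, as hinted by the two `dt` values, is the usual way such cascades keep the current dynamics invisible to the speed controller.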

  2. Intraoperative bleeding control by uniportal video-assisted thoracoscopic surgery†.

    Science.gov (United States)

    Gonzalez-Rivas, Diego; Stupnik, Tomaz; Fernandez, Ricardo; de la Torre, Mercedes; Velasco, Carlos; Yang, Yang; Lee, Wentao; Jiang, Gening

    2016-01-01

    Owing to advances in video-assisted thoracic surgery (VATS), the majority of pulmonary resections can currently be performed by VATS in a safe manner with a low level of morbidity and mortality. The majority of the complications that occur during VATS can be minimized with correct preoperative planning of the case as well as careful pulmonary dissection. Coordination of the whole surgical team is essential when confronting an emergency such as major bleeding. This is particularly important during the VATS learning curve, where the occurrence of intraoperative complications, particularly significant bleeding, usually ends in a conversion to open surgery. However, conversion should not be considered as a failure of the VATS approach, but as a resource to maintain the patient's safety. The correct assessment of any bleeding is of paramount importance during major thoracoscopic procedures. Inadequate management of the source of bleeding may result in major vessel injury and massive bleeding. If bleeding occurs, a sponge stick should be readily available to apply pressure immediately to control the haemorrhage. It is always important to remain calm and not to panic. With the bleeding temporarily controlled, a decision must be made promptly as to whether a thoracotomy is needed or if the bleeding can be solved through the VATS approach. This will depend primarily on the surgeon's experience. The operative vision provided with high-definition cameras, specially designed or adapted instruments and the new sealants are factors that facilitate the surgeon's control. After experience has been acquired with conventional or uniportal VATS, the rate of complications diminishes and the majority of bleeding events are controlled without the need for conversion to thoracotomy.

  3. Research on Remote Video Monitoring System Used for Numerical Control Machine Tools Based on Embedded Technology

    Institute of Scientific and Technical Information of China (English)

    LIU Quan; QU Xuehong; ZHOU Henglin; LONG Yihong

    2006-01-01

This paper presents the design of an embedded video monitoring system using DSP (Digital Signal Processing) and ARM (Advanced RISC Machine) processors. This system is an important part of the self-service operation of numerical control machine tools. First, the analog input signals from the CCD (Charge Coupled Device) camera are converted into digital signals and passed to the DSP system, where the video sequence is encoded according to H.264, the new-generation image compression standard. The encoded stream is transmitted to the ARM system through xBus, packetized in the ARM system, and forwarded to the client through the gateway. Web technology, embedded technology, and image compression and coding technology are integrated in the system, which can be widely used in the self-service operation of numerical control machine tools and in intelligent robot control.
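The capture, encode, packetize, and transmit flow described above can be sketched in a few lines. This is a hedged stand-in: zlib replaces the H.264 encoder running on the DSP, and the 8-byte header is a hypothetical framing, not the system's actual xBus or gateway protocol.

```python
import struct
import zlib

def encode_frame(raw: bytes) -> bytes:
    # Stand-in for the H.264 encoder on the DSP side
    # (zlib is used here only to make the sketch runnable).
    return zlib.compress(raw)

def pack_for_transport(frame_index: int, payload: bytes) -> bytes:
    # The ARM side wraps each encoded frame with a small header
    # before forwarding it to the client through the gateway.
    # Header layout (hypothetical): frame index, payload length.
    header = struct.pack(">II", frame_index, len(payload))
    return header + payload

def unpack(packet: bytes):
    frame_index, length = struct.unpack(">II", packet[:8])
    return frame_index, zlib.decompress(packet[8:8 + length])

raw = bytes(range(256)) * 4               # fake CCD frame data
pkt = pack_for_transport(7, encode_frame(raw))
idx, recovered = unpack(pkt)
print(idx, recovered == raw)              # 7 True
```

The round trip demonstrates the separation of concerns the abstract describes: the encoder only compresses, the transport layer only frames and forwards.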

  4. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating the physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.
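The pulse-speed measurement reduces to distance travelled over elapsed frames divided by the frame rate. A minimal sketch, where the frame rate and the pixels-per-metre calibration are assumed values, not figures from the paper:

```python
FPS = 240.0            # slow-motion capture rate (assumed)
PX_PER_M = 1000.0      # calibration: pixels per metre (assumed)

def pulse_speed(frame_a, x_a_px, frame_b, x_b_px):
    """Speed of a pulse from its position in two video frames."""
    dt = (frame_b - frame_a) / FPS      # elapsed time in seconds
    dx = (x_b_px - x_a_px) / PX_PER_M   # distance in metres
    return dx / dt

# Pulse moves 480 px between frames 10 and 58 -> 0.48 m in 0.2 s
v = pulse_speed(10, 100, 58, 580)
print(v)   # 2.4 (m/s)
```

The same two-point measurement, repeated along the spring, lets students check that the pulse speed is constant.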

  5. Illusory control, gambling, and video gaming: an investigation of regular gamblers and video game players.

    Science.gov (United States)

    King, Daniel L; Ejova, Anastasia; Delfabbro, Paul H

    2012-09-01

There is a paucity of empirical research examining the possible association between gambling and video game play. In two studies, we examined the association between video game playing, erroneous gambling cognitions, and risky gambling behaviour. One hundred and fifteen participants, including 65 electronic gambling machine (EGM) players and 50 regular video game players, were administered a questionnaire that examined video game play, gambling involvement, problem gambling, and beliefs about gambling. We then assessed each group's performance on a computerised gambling task that involved real money. A post-game survey examined perceptions of the skill and chance involved in the gambling task. The results showed that video game playing itself was not significantly associated with gambling involvement or problem gambling status. However, among those persons who both gambled and played video games, video game playing was uniquely and significantly positively associated with the perception of direct control over chance-based gambling events. Further research is needed to better understand the nature of this association, as it may assist in understanding the impact of emerging digital gambling technologies.

  6. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart-rate (HR) from the skin areas. Being a non-contact, sensor-free technique, VPPG has the potential to provide estimates of a subject's heart-rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions-of-interest, as well as the number of video frames used for data processing.
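The core of a VPPG algorithm is averaging the green channel over a skin region of interest and locating the dominant spectral peak within the physiological band. The following sketch illustrates that idea only; the frame rate, band limits, and the plain DFT peak search are assumptions, not the two algorithms evaluated in the study:

```python
import math

FPS = 30.0  # assumed video frame rate

def green_channel_mean(frame):
    # frame: list of (r, g, b) pixels inside the skin ROI
    return sum(g for _, g, _ in frame) / len(frame)

def heart_rate_bpm(signal, fps=FPS):
    """Dominant frequency of the mean-green trace within the
    physiological band 0.7-4 Hz (42-240 bpm), via a plain DFT."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]          # remove the DC level
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if 0.7 <= f <= 4.0:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            p = re * re + im * im
            if p > best_p:
                best_f, best_p = f, p
    return best_f * 60.0

# Synthetic 10 s trace: 1.2 Hz pulse (72 bpm) plus a slow drift
trace = [0.05 * math.sin(2 * math.pi * 1.2 * t / FPS)
         + 0.01 * t / FPS for t in range(int(10 * FPS))]
print(round(heart_rate_bpm(trace)))   # 72
```

The band limit is what rejects the slow illumination drift; the study's sensitivity findings correspond to how fragile the green-channel trace becomes when the ROI moves or shrinks.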

  7. Introducing Contactless Blood Pressure Assessment Using a High Speed Video Camera.

    Science.gov (United States)

    Jeong, In Cheol; Finkelstein, Joseph

    2016-04-01

Recent studies demonstrated that blood pressure (BP) can be estimated using pulse transit time (PTT). For PTT calculation, a photoplethysmogram (PPG) is usually used to detect a time lag in pulse wave propagation which is correlated with BP. Until now, PTT and PPG were registered using a set of body-worn sensors. In this study a new methodology is introduced allowing contactless registration of PTT and PPG using a high speed camera, resulting in corresponding image-based PTT (iPTT) and image-based PPG (iPPG) generation. The iPTT value can potentially be utilized for blood pressure estimation; however, the extent of correlation between iPTT and BP is unknown. The goal of this preliminary feasibility study was to introduce the methodology for contactless generation of iPPG and iPTT and to make an initial estimation of the extent of correlation between iPTT and BP "in vivo." A short cycling exercise was used to generate BP changes in healthy adult volunteers in three consecutive visits. BP was measured by a verified BP monitor simultaneously with iPTT registration at three exercise points: rest, exercise peak, and recovery. iPPG was simultaneously registered at two body locations during the exercise using a high speed camera at 420 frames per second. iPTT was calculated as the time lag between pulse waves obtained as two iPPGs registered from simultaneous recording of the head and palm areas. The average inter-person correlation between PTT and iPTT was 0.85 ± 0.08. The range of inter-person correlations between PTT and iPTT was from 0.70 to 0.95 (p high speed camera can be potentially utilized for unobtrusive contactless monitoring of abrupt blood pressure changes in a variety of settings. The initial prototype system was able to successfully generate an approximation of pulse transit time and showed high intra-individual correlation between iPTT and BP. Further investigation of the proposed approach is warranted.
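iPTT extraction amounts to finding the lag that best aligns the two iPPG traces. A minimal cross-correlation sketch on a synthetic signal; the single Gaussian pulse and the lag search range are illustrative assumptions, not the study's processing chain:

```python
import math

FPS = 420.0  # high-speed camera rate used in the study

def iptt_ms(ppg_head, ppg_palm, fps=FPS, max_lag=60):
    """Image-based pulse transit time: the lag (in ms) that best
    aligns the palm iPPG with the head iPPG."""
    n = len(ppg_head)
    best_lag, best_corr = 0, -float("inf")
    for lag in range(0, max_lag + 1):
        # Unnormalized cross-correlation at this candidate lag
        c = sum(ppg_head[t] * ppg_palm[t + lag]
                for t in range(n - max_lag))
        if c > best_corr:
            best_lag, best_corr = lag, c
    return 1000.0 * best_lag / fps

# Synthetic pulse: the palm wave arrives 21 frames (50 ms) after
# the head wave (Gaussian pulse shape, for illustration only)
head = [math.exp(-((t - 500) ** 2) / 800.0) for t in range(1200)]
palm = [math.exp(-((t - 521) ** 2) / 800.0) for t in range(1200)]
print(iptt_ms(head, palm))   # 50.0
```

The 420 fps frame rate gives a lag resolution of about 2.4 ms per frame, which bounds how finely iPTT changes can be resolved.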

8. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    Directory of Open Access Journals (Sweden)

    Susan G Heaslip

    Full Text Available The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate correlate with the daytime foraging behavior of leatherbacks (n = 19 in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h, and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata was the dominant prey (83-100%, but moon jellyfish (Aurelia aurita were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models. Handling time increased with prey size regardless of prey species (p = 0.0001. Estimates of energy intake averaged 66,018 kJ • d(-1 but were as high as 167,797 kJ • d(-1 corresponding to turtles consuming an average of 330 kg wet mass • d(-1 (up to 840 kg • d(-1 or approximately 261 (up to 664 jellyfish • d(-1. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass • d(-1 equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to

  9. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine the position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves on real-time motion planning of the camera. Moreover, the recasting of camera constraints into potential fields is visually more accessible to game designers and has the potential to be implemented as a plug-in to 3D level design and editing tools currently available with games.
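The APF idea, summing an attraction toward the desired viewpoint with repulsions from scene geometry and following the resulting gradient, can be sketched in 2D. Gains, step size, and the repulsion form below are illustrative choices, not CamOn's actual parameters:

```python
import math

def apf_step(cam, target, obstacles, step=0.05,
             k_att=1.0, k_rep=0.5, influence=2.0):
    """One gradient step on the combined potential field:
    attraction toward the desired viewpoint, repulsion from
    scene geometry within an influence radius."""
    fx = k_att * (target[0] - cam[0])
    fy = k_att * (target[1] - cam[1])
    for ox, oy in obstacles:
        dx, dy = cam[0] - ox, cam[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            # Repulsion grows as the camera nears the obstacle
            push = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += push * dx
            fy += push * dy
    return cam[0] + step * fx, cam[1] + step * fy

cam = (0.0, 0.0)
target = (10.0, 0.0)            # desired viewpoint
obstacles = [(5.0, 0.3)]        # scene geometry to route around
for _ in range(400):
    cam = apf_step(cam, target, obstacles)
print(round(math.hypot(cam[0] - 10.0, cam[1]), 3))   # 0.0
```

The camera detours around the obstacle and settles at the viewpoint; in CamOn the same mechanism also drives the view target, which is what encodes composition constraints.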

  10. Lights, camera, action… spotlight on trauma video review: an underutilized means of quality improvement and education.

    Science.gov (United States)

    Rogers, Steven C; Dudley, Nanette C; McDonnell, William; Scaife, Eric; Morris, Stephen; Nelson, Douglas

    2010-11-01

Trauma video review (TVR) is an effective method of quality improvement and education. The objective of this study was to determine TVR practices in the United States and the use of TVR for quality improvement and education. Adult and pediatric trauma centers identified by the American College of Surgeons (n = 102) and the National Association of Children's Hospitals and Related Institutions (n = 24) were surveyed by telephone. Surveys included questions regarding program demographics, residency information, and past/present TVR practices. One hundred eight trauma centers (86%) were contacted, and 99% (107/108) completed surveys. Of the surveyed centers, 34% never used TVR; 37% previously used TVR and had discontinued at the time of the survey, with most reporting legal/privacy concerns; 20% were currently using TVR; and 9% were planning to use TVR in the future. Nineteen percent (14/73) of general trauma centers are using or planning to use TVR compared with 50% (17/34) of pediatric centers (P = 0.001). One hundred percent of current TVR programs report that TVR improves the trauma resuscitation process. Most pediatric emergency medicine (87%), emergency medicine (89%), and surgery (97%) trainees participate in trauma resuscitation at trauma centers. Fifty-two percent of centers using TVR report trainee attendance at the TVR process/conference; 38% specifically use TVR for resident education. All current TVR programs report that it improves their trauma processes. More pediatric trauma centers report planning future TVR programs, but the implication of such plans remains unclear. Opportunities exist for expanded use of TVR for resident education.

  11. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing the IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients were compared for colocated versus video review of IPS and errors. Study methodology and bias were judged by the Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, and FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) to 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. The intraclass correlation coefficient was 0.73 to 0.92, depending on the procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p competency. Prognostic study, level II.

  12. Detection, Deterrence, Docility: Techniques of Control by Surveillance Cameras

    NARCIS (Netherlands)

    Balamir, S.

    2013-01-01

    In spite of the growing omnipresence of surveillance cameras, not much is known by the general public about their background. While many disciplines have scrutinised the techniques and effects of surveillance, the object itself remains somewhat of a mystery. A design typology of surveillance cameras

  13. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. II: Viscosity effect of several types of polymer additives

    OpenAIRE

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

The contraction process of living Vorticella sp. in polymer solutions with various viscosities has been investigated by image processing using a high-speed video camera. The viscosity of the external fluid ranges from 1 to 5 mPa·s for different polymer additives such as hydroxypropyl cellulose, polyethylene oxide, and Ficoll. The temporal change in the contraction length of Vorticella sp. in various macromolecular solutions is fitted well by a stretched exponential function based on the nuclea...
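A stretched exponential fit of a normalized contraction trace, y = exp(-(t/tau)**beta), can be obtained by linearizing ln(-ln y) = beta*ln t - beta*ln tau. This is a sketch on synthetic data; the paper's actual fitting procedure and parameter values are not reproduced here:

```python
import math

def fit_stretched_exponential(ts, ys):
    """Recover (tau, beta) of y = exp(-(t/tau)**beta) by linear
    least squares on ln(-ln y) = beta*ln t - beta*ln tau."""
    X = [math.log(t) for t in ts]
    Y = [math.log(-math.log(y)) for y in ys]
    n = len(X)
    mx, my = sum(X) / n, sum(Y) / n
    # Slope of the linearized relation is beta
    beta = (sum((x - mx) * (y - my) for x, y in zip(X, Y))
            / sum((x - mx) ** 2 for x in X))
    # Intercept gives tau: ln tau = mean(X) - mean(Y)/beta
    tau = math.exp(mx - my / beta)
    return tau, beta

# Synthetic contraction trace with tau = 4 (ms) and beta = 0.6
ts = [0.5 * k for k in range(1, 30)]
ys = [math.exp(-(t / 4.0) ** 0.6) for t in ts]
tau, beta = fit_stretched_exponential(ts, ys)
print(round(tau, 3), round(beta, 3))   # 4.0 0.6
```

On noisy experimental traces a nonlinear least-squares fit would be preferable, since the double-log transform amplifies noise near y = 1.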

  14. Modeling 3D Unknown object by Range Finder and Video Camera and Updating of a 3D Database by a Single Camera View

    National Research Council Canada - National Science Library

    Nzie, C; Triboulet, J; Mallem, Malik; Chavand, F

    2005-01-01

The device consists of a camera which gives the human operator (HO) an indirect view of a scene (real world), and proprioceptive and exteroceptive sensors allowing the recreation of the 3D geometric database of an environment...

  15. Use of KLV to combine metadata, camera sync, and data acquisition into a single video record

    Science.gov (United States)

    Hightower, Paul

    2015-05-01

SMPTE standards reserve significant data spaces in each frame that may be used to store time stamps and other time-sensitive data. There are metadata spaces both in the analog equivalent of the horizontal blanking, referred to as the Horizontal Ancillary (HANC) space, and in the analog equivalent of the vertical interval blanking lines, referred to as the Vertical Ancillary (VANC) space. The HANC space is very crowded with many data types, including information about frame rate and format, 16 channels of audio sound bites, copyright controls, billing information, and more than 2,000 other elements. The VANC space is relatively unused by cinema and broadcasters, which makes it a prime target for use in test, surveillance, and other specialized applications. Taking advantage of the SMPTE structures, one can design and implement custom data gathering and recording systems while maintaining full interoperability with standard equipment. The VANC data space can be used to capture image-relevant data and to overcome the transport latency and diminished image quality introduced by the use of compression.
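A KLV triplet, i.e. a 16-byte universal label key, a BER-encoded length, then the value, can be packed as follows. The key shown is for illustration only, not a registered SMPTE label:

```python
def ber_length(n: int) -> bytes:
    """BER length: short form below 128, long form otherwise."""
    if n < 128:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def klv_pack(key16: bytes, value: bytes) -> bytes:
    assert len(key16) == 16      # SMPTE universal labels are 16 bytes
    return key16 + ber_length(len(value)) + value

# Hypothetical 16-byte key for a timestamp element (illustration only)
KEY = bytes.fromhex("060e2b34010101010000000000000001")
packet = klv_pack(KEY, b"2015-05-21T10:00:00.000Z")
print(len(packet))        # 41  (16 key + 1 length + 24 value)
```

Packing time stamps this way into the VANC space is what keeps the side data frame-accurate while staying transparent to standard broadcast equipment.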

  16. Evaluating the Effects of Camera Perspective in Video Modeling for Children with Autism: Point of View versus Scene Modeling

    Science.gov (United States)

    Cotter, Courtney

    2010-01-01

    Video modeling has been used effectively to teach a variety of skills to children with autism. This body of literature is characterized by a variety of procedural variations including the characteristics of the video model (e.g., self vs. other, adult vs. peer). Traditionally, most video models have been filmed using third person perspective…

  17. Speed cameras, section control, and kangaroo jumps-a meta-analysis.

    Science.gov (United States)

    Høye, Alena

    2014-12-01

A meta-analysis was conducted of the effects of speed cameras and section control (point-to-point speed cameras) on crashes. Sixty-three effect estimates from 15 speed camera studies and five effect estimates from four section control studies were included in the analysis. Speed cameras were found to reduce total crash numbers by about 20%. The effect declines with increasing distance from the camera location. Fatal crashes were found to be reduced by 51%; this result may, however, be affected by regression to the mean (RTM). Section control was found to have a greater crash-reducing effect than speed cameras (-30% for total crash numbers and -56% for killed-or-seriously-injured (KSI) crashes). There is no indication that these results (except the one for the effect of speed cameras on fatal crashes) are affected by regression to the mean, publication bias, or outlier bias. The results indicate that kangaroo driving (braking and accelerating) occurs, but no adverse effects on speed or crashes were found. Crash migration, i.e., an increase of crash numbers on other roads due to rerouting of traffic, may occur in some cases at speed cameras, but the results do not indicate that such effects are common. Both speed cameras and section control were found to achieve considerable speed reductions, and the crash effects found in the meta-analysis are of a similar size or greater than one might expect based on the effects on speed.

  18. Vision system for driving control using camera mounted on an automatic vehicle. Jiritsu sokosha no camera ni yoru shikaku system

    Energy Technology Data Exchange (ETDEWEB)

    Nishimori, K.; Ishihara, K.; Tokutaka, H.; Kishida, S.; Fujimura, K. (Tottori University, Tottori (Japan). Faculty of Engineering); Okada, M. (Mazda Corp., Hiroshima (Japan)); Hirakawa, S. (Fujitsu Corp., Tokyo (Japan))

    1993-11-30

This report describes a vision system in which a CCD camera serves as the vision sensor for a model vehicle that travels autonomously under fuzzy control. The vision system is composed of an input image processing module, a situation recognition/analysis module that three-dimensionally recovers the road, a route-selecting navigation module that avoids obstacles, and a vehicle control module. With these modules, the CCD camera is used as the vision sensor to make the model vehicle travel automatically under fuzzy control. In the present research, traveling is controlled by treating the position and shape of the target in the image as fuzzy inference variables. Traveling simulations based on this method gave the following knowledge: even with image information from the vision system alone, the application of fuzzy control facilitates traveling. If the target is clearly identified, control is judged to be possible even from a vague image that does not provide exact location information. 4 refs., 11 figs.
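Treating the target's image position as a fuzzy variable can be sketched with triangular membership functions and centroid defuzzification. The memberships, rule outputs, and sign conventions here are illustrative, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c], peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(obj_x):
    """Steering command from the target's horizontal image position
    (-1 = far left, +1 = far right); positive output steers right.
    The fired rules are combined by a weighted average (centroid)."""
    rules = [
        (tri(obj_x, -2.0, -1.0, 0.0), -0.5),   # target left  -> steer left
        (tri(obj_x, -1.0,  0.0, 1.0),  0.0),   # centred      -> straight
        (tri(obj_x,  0.0,  1.0, 2.0), +0.5),   # target right -> steer right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(fuzzy_steering(0.0))    # 0.0   (target centred)
print(fuzzy_steering(0.5))    # 0.25  (gentle right correction)
print(fuzzy_steering(-1.0))   # -0.5  (hard left correction)
```

The appeal noted in the abstract is visible here: the controller degrades gracefully as the target position estimate gets vaguer, since partial memberships just blend adjacent rules.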

  19. Control and protection of outdoor embedded camera for astronomy

    Science.gov (United States)

    Rigaud, F.; Jegouzo, I.; Gaudemard, J.; Vaubaillon, J.

    2012-09-01

The purpose of the CABERNET-Podet-Met (CAmera BEtter Resolution NETwork, Pole sur la Dynamique de l'Environnement Terrestre - Meteor) project is the automated observation, by triangulation with three cameras, of meteor showers, in order to calculate meteoroid trajectories and velocities. The scientific goal is to search for the parent body, comet or asteroid, of each observed meteor. Installing outdoor cameras to perform astronomical measurements for several years with high reliability requires a very specific design for their housing. This contribution shows how we fulfilled the various functions of the camera boxes, such as cooling the CCD, heating to melt snow and ice, and protection against moisture, lightning, and sunlight. We present the principal and secondary functions, the product breakdown structure, the evaluation grid of criteria for the technical solutions, the adopted technology products, and their implementation in multifunction subassemblies for miniaturization purposes. To manage this project, we aimed for the lowest manpower and development time for every part. In the appendix, we present measurements of the image quality evolution during CCD cooling, and some pictures of the prototype.

  20. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    Science.gov (United States)

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, non-invasive head-tracking, without immersive virtual reality devices, was combined with and compared against classical control modes for robot movement and camera control. Three control conditions were tested: 1) classical joystick control of both the robot movements and the robot camera; 2) robot movements controlled by a joystick and the robot camera controlled by the user's head orientation; and 3) robot movements controlled by hand gestures and the robot camera controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface.
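A head-orientation-to-camera mapping of the kind compared here typically needs a dead zone to suppress involuntary head jitter, plus a gain and hard limits. A hedged sketch in which every parameter value is an assumption:

```python
def head_to_camera(yaw_deg, pitch_deg, dead_zone=5.0, gain=1.5,
                   pan_limit=90.0, tilt_limit=45.0):
    """Map operator head orientation (degrees) to robot-camera
    pan/tilt commands: dead zone, then gain, then clamping."""
    def axis(angle, limit):
        if abs(angle) <= dead_zone:
            return 0.0                       # ignore small jitter
        sign = 1.0 if angle > 0 else -1.0
        cmd = sign * (abs(angle) - dead_zone) * gain
        return max(-limit, min(limit, cmd))  # respect camera limits
    return axis(yaw_deg, pan_limit), axis(pitch_deg, tilt_limit)

print(head_to_camera(3.0, -2.0))    # (0.0, 0.0)  inside dead zone
print(head_to_camera(25.0, -20.0))  # (30.0, -22.5)
```

Tuning the dead zone and gain is one of the usability trade-offs such an interface faces against a joystick: too small a dead zone makes the camera twitchy, too large makes it feel unresponsive.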

  1. Joint rate control and scheduling for wireless uplink video streaming

    Institute of Scientific and Technical Information of China (English)

    HUANG Jian-wei; LI Zhu; CHIANG Mung; KATSAGGELOS Aggelos K.

    2006-01-01

We solve the problem of uplink video streaming in CDMA cellular networks by jointly designing the rate control and scheduling algorithms. In the pricing-based distributed rate control algorithm, the base station announces a price per unit of average rate it can support, and the mobile devices choose their desired average transmission rates by balancing their video quality against the cost of transmission. Each mobile device then determines the specific video frames to transmit by a video summarization process. In the time-division-multiplexing (TDM) scheduling algorithm, the base station collects the information on frames to be transmitted from all devices within the current time window, sorts them in increasing order of deadlines, and schedules the transmissions in a TDM fashion. This joint algorithm takes advantage of multi-user content diversity and maximizes the network total utility (i.e., minimizes the network total distortion) while satisfying the delivery deadline constraints. Simulations showed that the proposed algorithm significantly outperforms the constant rate provision algorithm.
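The pricing loop can be sketched as follows: the base station adjusts the per-unit-rate price until aggregate demand matches the supportable capacity, while each device independently picks the rate maximizing its utility minus cost. The logarithmic utilities and all constants below are illustrative assumptions, not the paper's model:

```python
def device_rate(price, weight, r_max=2.0):
    """Each mobile maximizes weight*log(rate) - price*rate,
    giving rate = weight/price, capped at a per-device maximum."""
    return min(r_max, weight / price)

def base_station_pricing(weights, capacity, iters=200, step=0.05):
    """The base station raises the price while total demand exceeds
    the supportable average rate, and lowers it otherwise."""
    price = 1.0
    for _ in range(iters):
        demand = sum(device_rate(price, w) for w in weights)
        price = max(1e-6, price + step * (demand - capacity))
    return price, [device_rate(price, w) for w in weights]

price, rates = base_station_pricing([1.0, 2.0, 3.0], capacity=6.0)
print(round(price, 2), round(sum(rates), 2))   # 0.5 6.0
```

At the equilibrium price the devices' self-interested choices exactly fill the cell capacity, which is what makes the scheme distributed: the base station only broadcasts one scalar.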

  2. Design, Development and Testing of the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) Guidance, Navigation and Control System

    Science.gov (United States)

    Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.

    2003-01-01

    Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.

  3. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  4. Camera Operator and Videographer

    Science.gov (United States)

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  5. The photothermal camera - a new non destructive inspection tool; La camera photothermique - une nouvelle methode de controle non destructif

    Energy Technology Data Exchange (ETDEWEB)

    Piriou, M. [AREVA NP Centre Technique SFE - Zone Industrielle et Portuaire Sud - BP13 - 71380 Saint Marcel (France)

    2007-07-01

The Photothermal Camera, developed by the Non-Destructive Inspection Department at AREVA NP's Technical Center, is a device created to replace penetrant testing, a method whose drawbacks include environmental pollutants, industrial complexity and potential operator exposure. We have already seen how the Photothermal Camera can work alongside or instead of conventional surface inspection techniques such as penetrant, magnetic particle or eddy currents. With it, users can detect without any surface contact ligament defects or openings measuring just a few microns on rough oxidized, machined or welded metal parts. It also enables them to work on geometrically varied surfaces, hot parts or insulating (dielectric) materials without interference from the magnetic properties of the inspected part. The Photothermal Camera method has already been used for in situ inspections of tube/plate welds on an intermediate heat exchanger of the Phenix fast reactor. It also replaced the penetrant method for weld inspections on the ITER vacuum chamber, for weld crack detection on vessel head adapter J-welds, and for detecting cracks brought on by heat crazing. What sets this innovative method apart from others is its ability to operate at distances of up to two meters from the inspected part, as well as its remote control functionality at distances of up to 15 meters (or more via Ethernet), and its emissions-free environmental cleanliness. These make it a true alternative to penetrant testing, to the benefit of operator and environmental protection. (author)

  6. Detecting method of subjects' 3D positions and experimental advanced camera control system

    Science.gov (United States)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality, or of tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. Here, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
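With two calibrated sensor cameras, the subject's 3D coordinates follow from the disparity between the two views. A rectified-stereo sketch, where the focal length and baseline are assumed values, not the system's calibration:

```python
def triangulate(x_left_px, x_right_px, y_px,
                focal_px=800.0, baseline_m=0.5):
    """Depth from disparity for two parallel sensor cameras,
    then back-projection of the image point to 3D
    (rectified-stereo assumption, left camera as origin)."""
    disparity = x_left_px - x_right_px       # pixels
    z = focal_px * baseline_m / disparity    # depth in metres
    x = z * x_left_px / focal_px
    y = z * y_px / focal_px
    return x, y, z

# Subject's colour blob detected at x=100 px (left camera)
# and x=60 px (right camera), y=20 px in both
print(triangulate(100.0, 60.0, 20.0))   # (1.25, 0.25, 10.0)
```

Feeding the resulting 3D track into the camera drive is what turns colour detection in two views into the "moving shoot" behaviour the abstract describes.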

  7. Keyboard before Head Tracking Depresses User Success in Remote Camera Control

    Science.gov (United States)

    Zhu, Dingyun; Gedeon, Tom; Taylor, Ken

    In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving the camera control to either automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue, a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a Pan-Tilt-Zoom (PTZ) camera. The camera was controlled either by a keyboard or by head tracking, using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that head motion control provided performance comparable to the keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports that the worst-performing method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.

  8. Effects of emotional videos on postural control in children.

    Science.gov (United States)

    Brandão, Arthur de Freitas; Palluel, Estelle; Olivier, Isabelle; Nougier, Vincent

    2016-03-01

    The link between emotions and postural control has been rather unexplored in children. The objective of the present study was to establish whether the projection of pleasant and unpleasant videos with similar arousal would lead to specific postural responses such as postural freezing, aversive or appetitive behaviours as a function of age. We hypothesized that postural sway would similarly increase with the viewing of high-arousal videos in children and adults, whatever the emotional context. 40 children participated in the study and were divided into two age groups: 7-9 years (n=23; mean age=8 years ± 0.7) and 10-12 years (n=17; mean age=11 years ± 0.7). 19 adults (mean age=25.8 years ± 4.4) also took part in the experiment. They viewed emotional videos while standing still on a force platform. Centre of foot pressure (CoP) displacements were analysed. Antero-posterior and medio-lateral mean speed and sway path length increased similarly with the viewing of high-arousal movies in the younger children, older children, and adults. Our findings suggest that the development of postural control is not influenced by the maturation of emotional processing.
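
    The two CoP outcome measures named in the abstract, sway path length and mean speed, are direct functions of the sampled CoP trajectory. A minimal sketch (sampling rate and units are illustrative):

```python
def sway_metrics(ap, ml, fs):
    """Sway path length (total CoP excursion) and mean speed from
    antero-posterior (ap) and medio-lateral (ml) CoP samples in mm,
    sampled at fs Hz.  Path is the sum of segment lengths between
    consecutive samples; mean speed is path over trial duration."""
    path = 0.0
    for i in range(1, len(ap)):
        path += ((ap[i] - ap[i-1])**2 + (ml[i] - ml[i-1])**2) ** 0.5
    duration = (len(ap) - 1) / fs
    return path, path / duration     # mm, mm/s

# Toy trajectory: a square of side 10 mm traced at 1 Hz.
ap = [0, 10, 10, 0, 0]
ml = [0, 0, 10, 10, 0]
path, speed = sway_metrics(ap, ml, fs=1.0)
```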

  9. Neural Network Method for Colorimetry Calibration of Video Cameras

    Institute of Scientific and Technical Information of China (English)

    周双全; 赵达尊

    2000-01-01

    To transfer color data from a device-dependent color space (the video camera's RGB) into a device-independent color space, a multilayer feedforward network with the error backpropagation (BP) learning rule was used as a nonlinear transformer realizing the mapping from the RGB color space to the CIELAB color space. Different network structures were evaluated for their mapping accuracy. The results show that BP neural networks can provide satisfactory mapping accuracy for color space transformation of video cameras.
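
    The mapping such a network learns has an analytic counterpart for a standardized device. As a point of reference (not the paper's method, which fits measured camera data), here is the direct sRGB-to-CIELAB conversion under a D65 white point, the kind of target values one could use to generate training pairs:

```python
def rgb_to_lab(r, g, b):
    """Analytic sRGB (components in 0..1) -> CIELAB under D65.
    Assumes the IEC sRGB gamma and primaries; a real camera deviates
    from this, which is why a learned mapping is needed."""
    def lin(c):                       # undo sRGB gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear sRGB -> CIE XYZ (D65)
    x = 0.4124*r + 0.3576*g + 0.1805*b
    y = 0.2126*r + 0.7152*g + 0.0722*b
    z = 0.0193*r + 0.1192*g + 0.9505*b
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883   # D65 white point
    def f(t):                         # CIELAB companding function
        return t ** (1/3) if t > (6/29)**3 else t / (3*(6/29)**2) + 4/29
    fx, fy, fz = f(xn), f(yn), f(zn)
    return 116*fy - 16, 500*(fx - fy), 200*(fy - fz)

L, a, b_ = rgb_to_lab(1.0, 1.0, 1.0)   # pure white: L ~ 100, a ~ b ~ 0
```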

  10. Car speed estimation based on cross-ratio using video data of car-mounted camera (black box).

    Science.gov (United States)

    Han, Inhwan

    2016-12-01

    This paper proposes several methods for using footage from a car-mounted camera (car black box) to estimate the speed of the car carrying the camera, or the speed of other cars. This enables estimating car velocities directly from recorded footage without needing the specific physical locations of the cars shown in the recorded material. To achieve this, this study collected 96 cases of black box footage and classified them for analysis based on various factors such as travel circumstances and directions. With these data, several case studies were conducted on estimating the speed of the camera-mounted car, and of other cars in the recorded footage while the camera-mounted car is stationary or moving. Additionally, a rough method for estimating the speed of other cars moving through a curvilinear path is described for practical use, together with its analysis results. Speed estimates made using the cross-ratio were compared with the results of the traditional footage-analysis method and with GPS calculations for camera-mounted cars, demonstrating the method's applicability.
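
    The cross-ratio the abstract relies on is the projective invariant of four collinear points: it has the same value in the image as on the road, so a point's road position can be recovered from known landmarks without calibrating the camera. A sketch with hypothetical pixel measurements (the paper's own landmark choices are not given in the abstract):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (CA/CB)/(DA/DB) of four collinear points given by
    1-D coordinates; invariant under perspective projection."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

# Suppose road marks at 0 m, 6 m and 12 m are visible and a car's wheel
# projects between the last two.  Hypothetical image positions of the
# four collinear points, in pixels along the lane line:
img = [100.0, 160.0, 200.0, 240.0]        # marks A, B, car X, mark D
k = cross_ratio(*img)                     # image cross-ratio

# Invariance means cross_ratio(0, 6, X, 12) == k in road coordinates;
# recover X by bisection (the function is monotone on (6, 12)):
lo, hi = 6.0, 12.0
for _ in range(60):
    mid = (lo + hi) / 2
    if cross_ratio(0.0, 6.0, mid, 12.0) > k:
        lo = mid
    else:
        hi = mid
X = (lo + hi) / 2                         # car position in meters
```

    Doing this on two frames a known time apart turns the two recovered positions into a speed estimate.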

  11. A source-based congestion control strategy for real-time video transport on IP network

    Science.gov (United States)

    Chen, Xia; Cai, Canhui

    2005-07-01

    The goal of this paper is to design a TCP-friendly real-time video transport protocol that not only utilizes network resources efficiently, but also effectively prevents the real-time video stream from congesting the network. To this end, we propose a source-based congestion control scheme that adapts the video coding rate to the channel capacity of the IP network in three stages: rate control, rate-adaptive video encoding, and rate shaping.
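
    The abstract does not spell out the rate-control law, but "TCP-friendly" source rate control is conventionally anchored on the TFRC throughput equation (RFC 5348), which caps the sender at the rate a TCP flow would achieve under the same loss and delay. A sketch of that cap, with illustrative numbers:

```python
from math import sqrt

def tfrc_rate(s, rtt, p, t_rto=None):
    """TCP-friendly target rate in bytes/s from the TFRC throughput
    equation: packet size s (bytes), round-trip time rtt (s), loss
    event rate p.  t_rto defaults to the common 4*rtt heuristic."""
    if p <= 0:
        return float("inf")           # no observed loss: unconstrained
    t_rto = t_rto if t_rto is not None else 4 * rtt
    denom = (rtt * sqrt(2 * p / 3)
             + t_rto * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p * p))
    return s / denom

# The encoder's target bit rate is clamped to the TCP-friendly rate
# (2 Mbit/s requested, 1460-byte packets, 100 ms RTT, 1% loss):
video_rate = min(2_000_000 / 8, tfrc_rate(s=1460, rtt=0.1, p=0.01))
```

    The clamped value would then drive the rate-adaptive encoding and rate-shaping stages the abstract lists.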

  12. A Prediction Method of TV Camera Image for Space Manual-control Rendezvous and Docking

    Science.gov (United States)

    Zhen, Huang; Qing, Yang; Wenrui, Wu

    Space manual-control rendezvous and docking (RVD) is a key technology for accomplishing the RVD mission in manned space engineering, especially when the automatic control system is out of work. The pilot on the chase spacecraft manipulates the hand-stick using the image of the target spacecraft captured by a TV camera, from which the relative position and attitude of the chase and target spacecraft can be determined. Therefore, the size, position, brightness and shadow of the target on the TV camera are key to guaranteeing the success of manual-control RVD. A method of predicting the on-orbit TV camera image at different relative positions and light conditions during the RVD process is discussed. Firstly, the basic principle of capturing the image of the cross drone on the target spacecraft by TV camera is analyzed theoretically, based on which the strategy of manual-control RVD is discussed in detail. Secondly, the relationship between the displayed size or position and the real relative distance of the chase and target spacecraft is presented; the brightness of and reflection by the target spacecraft at different light conditions are described, and the shadow cast on the cross drone by the chase or target spacecraft is analyzed. Thirdly, a prediction method for on-orbit TV camera images at a given orbit and light condition is provided, and the characteristics of the TV camera image during RVD are analyzed. Finally, the size, position, brightness and shadow of the target spacecraft on the TV camera image at a typical orbit are simulated. Comparison of the simulated images with the real images captured by the TV camera on the Shenzhou manned spaceship shows that the prediction method is reasonable.
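
    The "displayed size versus real relative distance" relationship the abstract mentions is, at minimum, the pinhole projection model: apparent size scales inversely with range. A minimal sketch with illustrative numbers (the paper's actual camera parameters are not given):

```python
def projected_size_px(real_size_m, distance_m, focal_px):
    """Apparent size on the image, in pixels, of an object of known
    physical size at a given range, under the pinhole model
    s = f * S / d."""
    return focal_px * real_size_m / distance_m

# A 1 m docking cross at 10 m with a 2000 px focal length spans 200 px;
# halving the distance doubles the apparent size.
s_far = projected_size_px(1.0, 10.0, 2000.0)
s_near = projected_size_px(1.0, 5.0, 2000.0)
```

    Inverting the same relation is what lets the pilot read relative distance off the displayed size of the cross drone.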

  13. Control of Perceptual Image Quality Based on PID for Streaming Video

    Institute of Scientific and Technical Information of China (English)

    SONG Jian-xin

    2003-01-01

    Constant levels of perceptual quality of streaming video are what users ideally expect. In most cases, however, they receive time-varying levels of video quality. In this paper, the author proposes a new control method for perceptual quality in variable-bit-rate video encoding for streaming video. An image quality calculation based on the perception of the human visual system is presented. Quantization properties of DCT coefficients are analyzed for effective control, and quantization scale factors are determined based on the visual masking effect. A Proportional-Integral-Derivative (PID) controller is used to control the image quality. Experimental results show that this method improves the uniformity of perceptual quality of encoded video.
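
    As a sketch of the control element, here is a minimal discrete PID update; treating the frame's quality index as the process variable and the output as a nudge to the quantization scale is illustrative, since the abstract does not give the paper's gains or sign conventions:

```python
class PID:
    """Textbook discrete PID controller.  The measured value is the
    current frame's perceptual quality index; the returned correction
    would adjust the quantizer for the next frame (gains illustrative)."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = None

    def update(self, measured, dt=1.0):
        err = self.setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=0.9)   # target quality 0.9
delta_q = pid.update(measured=0.8)                 # quality fell short
```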

  14. Identification of Prey Captures in Australian Fur Seals (Arctocephalus pusillus doriferus) Using Head-Mounted Accelerometers: Field Validation with Animal-Borne Video Cameras.

    Directory of Open Access Journals (Sweden)

    Beth L Volpov

    Full Text Available This study investigated prey captures in free-ranging adult female Australian fur seals (Arctocephalus pusillus doriferus) using head-mounted 3-axis accelerometers and animal-borne video cameras. Acceleration data were used to identify individual attempted prey captures (APC), and video data were used to independently verify APC and prey types. Results demonstrated that head-mounted accelerometers could detect individual APC but were unable to distinguish among prey types (fish, cephalopod, stingray) or between successful captures and unsuccessful capture attempts. The mean detection rate (true positive rate) on individual animals in the testing subset ranged from 67-100%, and mean detection on the testing subset averaged across 4 animals ranged from 82-97%. The mean false positive (FP) rate ranged from 15-67% individually in the testing subset, and 26-59% averaged across 4 animals. Surge and sway had significantly greater detection rates, but conversely also greater FP rates, compared to heave. Video data also indicated that some head movements recorded by the accelerometers were unrelated to APC and that a peak in acceleration variance did not always equate to an individual prey item. The results of the present study indicate that head-mounted accelerometers provide a complementary tool for investigating foraging behaviour in pinnipeds, but that detection and FP correction factors need to be applied for reliable field application.
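
    The abstract mentions detecting capture attempts as peaks in acceleration variance. A toy sketch of that idea, flagging event onsets where a moving-window variance crosses a threshold (window length and threshold are illustrative, not the study's values, which were tuned per animal and axis):

```python
def detect_captures(accel, win=5, thresh=1.0):
    """Return onset indices where the moving variance of one
    acceleration axis exceeds a threshold; consecutive above-threshold
    windows count as a single event."""
    events = []
    in_event = False
    for i in range(len(accel) - win + 1):
        w = accel[i:i + win]
        m = sum(w) / win
        var = sum((x - m) ** 2 for x in w) / win
        if var > thresh and not in_event:
            events.append(i)         # event onset
            in_event = True
        elif var <= thresh:
            in_event = False
    return events

# Quiet swimming bracketing one violent head strike:
quiet = [0.0, 0.1, -0.1, 0.05, -0.05] * 4
strike = [0.0, 3.0, -2.5, 2.0, -1.5]
events = detect_captures(quiet + strike + quiet)
```

    The study's false-positive findings are visible even in this toy: any vigorous head movement, prey-related or not, raises the variance and trips the detector.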

  15. Slew Maneuver Control for Spacecraft Equipped with Star Camera and Reaction Wheels

    DEFF Research Database (Denmark)

    Wisniewski, Rafal; Kulczycki, P.

    2005-01-01

    A configuration consisting of a star camera, four reaction wheels and magnetorquers for momentum unloading has become standard for many spacecraft missions. This popularity has motivated numerous agencies and private companies to initiate work on the design of an embedded attitude control system realized on an integrated circuit. This paper provides an easily implementable control algorithm for this type of configuration. The paper considers two issues: a slew maneuver with a feature of avoiding direct exposure of the camera's CCD chip to the Sun, three-axis attitude control, and optimal control...

  16. General Attitude Control Algorithm for Spacecraft Equipped with Star Camera and Reaction Wheels

    DEFF Research Database (Denmark)

    Wisniewski, Rafal; Kulczycki, P.

    A configuration consisting of a star camera, four reaction wheels and magnetorquers for momentum unloading has become standard for many spacecraft missions. This popularity has motivated numerous agencies and private companies to initiate work on the design of an embedded attitude control system realized on an integrated circuit. This paper considers two issues: a slew maneuver with a feature of avoiding direct exposure of the camera's CCD chip to the Sun, three-axis attitude control, and optimal control torque distribution in a reaction wheel assembly. The attitude controller is synthesized...

  17. Experimental research on thermoelectric cooler for imager camera thermal control

    Science.gov (United States)

    Hu, Bing-ting; Kang, Ao-feng; Fu, Xin; Jiang, Shi-chen; Dong, Yao-hai

    2013-09-01

    Conventional passive thermal design failed to satisfy the CCDs' temperature requirement on a geostationary-orbit Imager camera because of the high power and low working temperature, leading to the use of a thermoelectric cooler (TEC) for heat dissipation. The TEC was used in conjunction with the external radiator in the CCDs' thermal design. To maintain the CCDs at a low working temperature, experimental research on the performance of the thermoelectric cooler was necessary, and the results can guide the application of TECs in different conditions. An experimental system to evaluate the performance of the TEC was designed and built, consisting of the TEC, a heat pipe, a TEC mounting plate, a radiator and a heater. A series of TEC performance tests were conducted for domestic and overseas TECs in a thermal vacuum environment. The effects of the TEC's mounting, input power and heat load on the temperature difference between the TEC's cold and hot faces were explored. Results demonstrated that this temperature difference increased only slightly once the TEC's operating voltage reached 80% of the rated voltage, while the temperature of the TEC's hot face continued to rise; operating the TEC at low voltage is therefore recommended. Based on the experimental results, thermal analysis indicated that the achievable temperature difference between the TEC's cold and hot faces could satisfy the temperature requirement with margin to spare.

  18. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal-plane hip, knee, and ankle angles, with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single-camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose-fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
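
    The ICC values quoted above come from an ANOVA decomposition of a subjects-by-raters table. As a sketch (the abstract does not state which ICC form was used; ICC(2,1), two-way random effects with absolute agreement, is a common choice for inter-rater designs like this):

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measures, for an n-subjects x k-raters table of scores."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # raters
    sst = sum((data[i][j] - grand) ** 2
              for i in range(n) for j in range(k))
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in near-perfect agreement on four knee angles (degrees):
angles = [[10.0, 10.1], [15.0, 15.2], [20.0, 19.9], [25.0, 25.1]]
icc = icc2_1(angles)                 # close to 1
```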

  19. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  20. What Does the Camera Communicate? An Inquiry into the Politics and Possibilities of Video Research on Learning

    Science.gov (United States)

    Vossoughi, Shirin; Escudé, Meg

    2016-01-01

    This piece explores the politics and possibilities of video research on learning in educational settings. The authors (a research-practice team) argue that changing the stance of inquiry from "surveillance" to "relationship" is an ongoing and contingent practice that involves pedagogical, political, and ethical choices on the…

  1. The Effect of Smartphone Video Camera as a Tool to Create Digital Stories for English Learning Purposes

    Science.gov (United States)

    Gromik, Nicolas A.

    2015-01-01

    The integration of smartphones in the language learning environment is gaining research interest. However, using a smartphone to learn to speak spontaneously has received little attention. The emergence of smartphone technology and its video recording feature are recognised as suitable learning tools. This paper reports on a case study conducted…

  2. TransCAIP: A Live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters.

    Science.gov (United States)

    Taguchi, Yuichi; Koike, Takafumi; Takahashi, Keita; Naemura, Takeshi

    2009-01-01

    The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.

  3. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    Science.gov (United States)

    2004-01-01

    It is clear that the controller developed in the previous section cannot be applied to solve the regulation problem. In this section, an extension is presented to illustrate how a visual servo controller can be developed to solve the regulation problem for the fixed camera configuration.

  4. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    Science.gov (United States)

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
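
    The estimate works because both sources hit the same grating: the first-order condition d sin(theta) = lambda holds for each, so the unknown wavelength follows from the ratio of the measured fringe deflections, with the known laser calibrating away the grating spacing. A sketch with hypothetical measurements:

```python
def ir_wavelength(x_ir, x_laser, screen_dist, lam_laser):
    """First-order grating equation d*sin(theta) = lambda for both
    sources gives lam_ir = lam_laser * sin(theta_ir)/sin(theta_laser).
    x_*: lateral offsets of the first-order fringes on the screen (m);
    screen_dist: grating-to-screen distance (m)."""
    sin_ir = x_ir / (x_ir**2 + screen_dist**2) ** 0.5
    sin_laser = x_laser / (x_laser**2 + screen_dist**2) ** 0.5
    return lam_laser * sin_ir / sin_laser

# Hypothetical readings from one photo: red laser (632.8 nm) fringe at
# 6.9 cm, remote-control fringe at 10.4 cm, screen 1 m from grating:
lam_ir = ir_wavelength(x_ir=0.104, x_laser=0.069,
                       screen_dist=1.0, lam_laser=632.8e-9)
# lam_ir comes out near 950 nm, consistent with a typical IR LED.
```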


  6. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  7. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  8. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  9. Contraction behaviors of Vorticella sp. stalk investigated using high-speed video camera. II: Viscosity effect of several types of polymer additives.

    Science.gov (United States)

    Kamiguri, Junko; Tsuchiya, Noriko; Hidema, Ruri; Yatabe, Zenji; Shoji, Masahiko; Hashimoto, Chihiro; Pansu, Robert Bernard; Ushiki, Hideharu

    2012-01-01

    The contraction process of living Vorticella sp. in polymer solutions with various viscosities has been investigated by image processing using a high-speed video camera. The viscosity of the external fluid ranges from 1 to 5 mPa·s for different polymer additives such as hydroxypropyl cellulose, polyethylene oxide, and Ficoll. The temporal change in the contraction length of Vorticella sp. in various macromolecular solutions is fitted well by a stretched exponential function based on the nucleation and growth model. The maximum speed of the contractile process monotonically decreases with an increase in the external viscosity, in accordance with power-law behavior. The index value is approximately 0.5, suggesting that the viscous energy dissipated by the contraction of Vorticella sp. is constant in a macromolecular environment.
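
    The stretched exponential form the authors fit can be sketched directly; the normalized Avrami-type shape below and the crude grid-search fit on synthetic data are illustrative, since the abstract does not give the exact parameterization or fitting procedure used:

```python
from math import exp

def contraction(t, tau, beta):
    """Normalized contraction fraction (0 -> 1) as a stretched
    exponential, x(t) = 1 - exp(-(t/tau)^beta), the Avrami-type form
    suggested by the nucleation-and-growth model."""
    return 1.0 - exp(-((t / tau) ** beta))

# Crude grid-search fit of tau on noiseless synthetic data, with the
# stretching exponent beta held fixed at 0.8:
ts = [0.5, 1.0, 2.0, 4.0]
obs = [contraction(t, tau=1.5, beta=0.8) for t in ts]
best = min(
    (sum((contraction(t, tau, 0.8) - o) ** 2 for t, o in zip(ts, obs)), tau)
    for tau in [x / 100 for x in range(50, 300)]
)
tau_hat = best[1]                    # recovers the generating tau
```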

  10. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  11. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera on a moving vehicle always suffers from image instability due to unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network based sliding mode control strategy to stabilize the image-captured area of the camera. The purpose is to suppress the vertical displacement of the sprung mass with the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against systematic uncertainties, relaxing the model-based requirement of sliding mode control; the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops: the outer loop is a position controller designed with the sliding mode strategy, while a PID controller in the inner loop tracks the desired force. Closed-loop stability and asymptotic convergence can be guaranteed on the basis of Lyapunov stability theory. Finally, simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.

  12. Uncooled radiometric camera performance

    Science.gov (United States)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions for applications currently using traditional photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high-resolution video imaging creates some unique challenges when using uncooled detectors. A temperature-controlled, field-of-view-limiting aperture (cold shield) is not typically included in the small-volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.

  13. Model classification rate control algorithm for video coding

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A model-classification rate control method for video coding is proposed. Macroblocks are classified according to their prediction errors, and different parameters are used in the rate-quantization and distortion-quantization models. The model parameters for each class are calculated from the previous frame of the same type during coding. These models are used to estimate the relations among rate, distortion and quantization for the current frame. Further steps, such as R-D-optimization-based quantization adjustment and smoothing of the quantization of adjacent macroblocks, are used to improve the quality. Experimental results prove that the technique is effective and can be realized easily. The method presented in this paper is well suited for MPEG and H.264 rate control.
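
    The abstract does not specify its model form, but a widely used rate-quantization model in MPEG-era rate control is the quadratic R(Q) = a/Q + b/Q^2, whose parameters would be refit per macroblock class from the previous frame. Inverting it for a target bit budget is a one-line quadratic solve:

```python
def quant_for_target(a, b, r_target):
    """Solve the quadratic rate-quantization model R(Q) = a/Q + b/Q^2
    for the quantization step Q that hits a target rate.  Rearranged:
    r_target*Q^2 - a*Q - b = 0, keep the positive root.
    (a, b are per-class parameters; values below are illustrative.)"""
    disc = (a * a + 4 * r_target * b) ** 0.5
    return (a + disc) / (2 * r_target)

q = quant_for_target(a=1000.0, b=2000.0, r_target=120.0)
# Sanity check: plugging q back into the model reproduces the target.
r_back = 1000.0 / q + 2000.0 / q**2
```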

  14. Optimal Rate Control in H.264 Video Coding Based on Video Quality Metric

    Directory of Open Access Journals (Sweden)

    R. Karthikeyan

    2014-05-01

    Full Text Available The aim of this research is to find a method for providing consistent visual quality across a complete video sequence in the H.264 video coding standard. H.264, with its significantly improved coding efficiency, finds important applications in digital video streaming, storage and broadcast. To achieve comparable quality across the complete video sequence under constraints on bandwidth availability and buffer fullness, it is important to allocate more bits to frames with high complexity or a scene change and fewer bits to other, less complex frames. A frame-layer bit allocation scheme is proposed based on a perceptual quality metric as the indicator of frame complexity. The proposed model computes the Quality Index ratio (QIr), the ratio of the predicted quality index of the current frame to the average quality index of all previous frames in the group of pictures, which is used for bit allocation to the current frame along with bits computed from buffer availability. The standard deviation of the perceptual quality indicator MOS computed for the proposed model is significantly lower, meaning the quality is consistent throughout the full video sequence. The experimental results thus show that the proposed model effectively handles scene changes and scenes with high motion for better visual quality.
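    The abstract names the Quality Index ratio but not the exact allocation formula. The sketch below assumes the buffer-based share is scaled by QIr so that frames predicted to reach lower quality (higher complexity or a scene change) receive more bits; both the names and the direction of scaling are assumptions, not the paper's method:

```python
def quality_index_ratio(pred_quality, prev_qualities):
    """QIr: predicted quality index of the current frame over the
    average quality index of the previous frames in the GOP."""
    return pred_quality * len(prev_qualities) / sum(prev_qualities)

def allocate_bits(pred_quality, prev_qualities, buffer_bits, frames_left):
    """Frame-layer bit budget sketch: an even split of the available
    buffer, inflated for frames whose predicted quality is below the
    GOP average (QIr < 1) and deflated otherwise."""
    base = buffer_bits / frames_left
    return base / quality_index_ratio(pred_quality, prev_qualities)
```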

  15. Video Enhancement and Dynamic Range Control of HDR Sequences for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Giovanni Ramponi

    2007-01-01

    Full Text Available CMOS video cameras with high dynamic range (HDR) output are particularly suitable for driving assistance applications, where lighting conditions can vary strongly, from direct sunlight to dark areas in tunnels. However, common visualization devices can only handle a low dynamic range, so a dynamic range reduction is needed. Many algorithms have been proposed in the literature to reduce the dynamic range of still pictures. However, extending the available methods to video is not straightforward, due to the peculiar nature of video data. We propose an algorithm that both reduces the dynamic range of video sequences and enhances their appearance, thus improving visual quality and reducing temporal artifacts. We also provide an optimized version of our algorithm for a viable hardware implementation on an FPGA. The feasibility of this implementation is demonstrated by means of a case study.
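    The paper's operator is not specified in the abstract; a minimal global sketch of the two ingredients it mentions, dynamic range reduction plus temporal consistency, is a Reinhard-style log-average operator whose scene "key" is low-pass filtered across frames to suppress flicker. All parameter choices here are illustrative:

```python
import math

def tone_map_frame(frame, prev_key=None, alpha=0.9):
    """Map an HDR frame (nested lists of linear luminances) to [0, 1).
    The log-average luminance ("key") is smoothed over time with an
    exponential filter so the mapping does not jump between frames."""
    eps = 1e-6
    pixels = [p for row in frame for p in row]
    log_avg = math.exp(sum(math.log(p + eps) for p in pixels) / len(pixels))
    key = log_avg if prev_key is None else alpha * prev_key + (1.0 - alpha) * log_avg
    out = [[(p / key) / (1.0 + p / key) for p in row] for row in frame]
    return out, key
```

Carrying `key` from frame to frame is what reduces temporal artifacts; a per-frame key would make the mapping flicker with scene content.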

  16. Adaptive efficient video transmission over the Internet based on congestion control and RS coding

    Institute of Scientific and Technical Information of China (English)

    黄伟红; 张福炎; 孙正兴

    2002-01-01

    An approach based on adaptive congestion control and adaptive error recovery with RS (Reed-Solomon) coding is presented for efficient video transmission over the Internet. Featuring weighted moving average rate control and TCP-friendliness, AVSP, a novel adaptive video streaming protocol, is designed with adjustable rate control parameters so as to respond quickly to QoS fluctuations during video transmission over the Internet. Combined with the congestion control policy, an adaptive RS coding error recovery scheme with variable parameters is presented to enhance the robustness of MPEG video transmission over the Internet within the constraints of the total system bandwidth.
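    AVSP's exact rate computation is not given in the abstract; the sketch below illustrates the two ideas it names, a weighted moving average of recently measured throughput and a TCP-friendly cap, with an illustrative weight vector that is not the one from the paper:

```python
def update_send_rate(rates, weights=(0.4, 0.3, 0.2, 0.1), tcp_rate=None):
    """Weighted moving average of recent measured throughputs
    (newest first). The result is clamped to an externally supplied
    TCP-friendly rate so the video stream does not starve
    competing TCP flows."""
    n = min(len(rates), len(weights))
    w = weights[:n]
    avg = sum(r * wi for r, wi in zip(rates, w)) / sum(w)
    return min(avg, tcp_rate) if tcp_rate is not None else avg
```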

  17. An Affordable Wireless Network Video Camera

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

      Small shops, boutiques, offices and residential users are all looking for a network camera that is easy to install and affordably priced. The economical, compact AXIS M1004-W network camera, with wireless capability and Axis' unique One-Click Camera Connection feature, is well suited to the surveillance needs of these small premises.

  18. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    Science.gov (United States)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image processing library and inexpensive hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.

  19. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2011-01-01

    Full Text Available A new system to very precisely couple radiation of a moving source into a Fourier Transform Infrared (FTIR) spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10 arc s for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system makes it possible to document residual pointing errors and to point onto the solar disk center even in the case of variable intensity distributions across the source due to cirrus or haze.

  20. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2010-11-01

    Full Text Available A new system to very precisely couple radiation of a moving source into an FTIR spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10" for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system makes it possible to document residual pointing errors and to point onto the solar disc centre even in the case of variable intensity distributions across the source due to cirrus or haze.
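    Both Camtracker records hinge on evaluating the solar image on the entrance field stop. A minimal sketch of that idea is an intensity-weighted centroid compared against the stop centre, feeding a proportional pointing correction; the gain and the simple P-only control law are illustrative, not Camtracker's actual algorithm:

```python
def pointing_correction(image, gain=0.1):
    """Compute the intensity-weighted centroid of the solar image
    (a 2-D list of intensities) and return (dx, dy) corrections that
    steer the tracker toward the centre of the field stop."""
    total = sum(sum(row) for row in image)
    cy = sum(y * sum(row) for y, row in enumerate(image)) / total
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    ty = (len(image) - 1) / 2.0        # target: stop centre
    tx = (len(image[0]) - 1) / 2.0
    return gain * (tx - cx), gain * (ty - cy)
```

Because the centroid is intensity-weighted, it remains usable under the variable intensity distributions (cirrus, haze) the abstract mentions, degrading gracefully rather than failing outright.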

  1. Improved control of exogenous attention in action video game players

    Directory of Open Access Journals (Sweden)

    Matthew S Cain

    2014-02-01

    Full Text Available Action video game players have demonstrated a number of attentional advantages over non-players. Here, we propose that many of those benefits might be underpinned by improved control over exogenous (i.e., stimulus-driven) attention. To test this, we used an anti-cuing task, in which a sudden-onset cue indicated that the target would likely appear in a separate location on the opposite side of the fixation point. When the time between the cue onset and the target onset was short (40 ms), non-players (nVGPs) showed a typical exogenous attention effect. Their response times were faster to targets presented at the cued (but less probable) location compared with the opposite (more probable) location. Video game players (VGPs), however, were less likely to have their attention drawn to the location of the cue. When the onset asynchrony was long (600 ms), VGPs and nVGPs were equally able to endogenously shift their attention to the likely (opposite) target location. In order to rule out processing-speed differences as an explanation for this result, we also tested VGPs and nVGPs on an attentional blink task. In a version of the attentional blink task that minimized demands on task switching and iconic memory, VGPs and nVGPs did not differ in second-target identification performance (i.e., VGPs had the same magnitude of attentional blink as nVGPs), suggesting that the anti-cuing results were due to flexible control over exogenous attention rather than to more general speed-of-processing differences.

  2. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available A camera on a moving vehicle always suffers from image instability due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach combined with linear quadratic regulator (LQR) control for a quarter-car active suspension system to stabilize the image-captured area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function (RBF) neural network is proposed to construct the map between the state error and the compensation component, which corrects the optimal state-feedback control law. The weight matrix of the RBF neural network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
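    The RBF compensation term described above can be sketched as follows: the state error is mapped through Gaussian basis functions to a corrective force that is added to the LQR law (u = -Kx + u_rbf). The centers, widths and weights here are fixed and illustrative; in the paper the weights are tuned online by a Lyapunov-based adaptation law:

```python
import math

def rbf_compensation(state_error, centers, widths, weights):
    """Evaluate a radial basis function network on the state error.
    phi_j = exp(-||e - c_j||^2 / (2 * w_j^2)); output = sum_j W_j*phi_j."""
    phi = [math.exp(-sum((e - c) ** 2 for e, c in zip(state_error, ctr))
                    / (2.0 * w ** 2))
           for ctr, w in zip(centers, widths)]
    return sum(wi * p for wi, p in zip(weights, phi))
```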

  3. Reactive control of zoom while fixating using perspective and affine cameras.

    Science.gov (United States)

    Tordoff, Ben; Murray, David

    2004-01-01

    This paper describes reactive visual methods of controlling the zoom setting of the lens of an active camera while fixating upon an object. The first method assumes a perspective projection and adjusts zoom to preserve the ratio of focal length to scene depth. The active camera is constrained to rotate, permitting self-calibration from the image motion of points on the static background. A planar structure from motion algorithm is used to recover the depth of the foreground. The foreground-background segmentation exploits the properties of the two different interimage homographies which are observed. The fixation point is updated by transfer via the observed planar structure. The planar method is shown to work on real imagery, but results from simulated data suggest that its extension to general 3D structure is problematical under realistic viewing and noise regimes. The second method assumes an affine projection. It requires no self-calibration and the zooming camera may move generally. Fixation is again updated using transfer, but now via the affine structure recovered by factorization. Analysis of the projection matrices allows the relative scale of the affine bases in different views to be found in a number of ways and, hence, controlled to unity. The various ways are compared and the best used on real imagery captured from an active camera fitted with a controllable zoom lens in both look-move and continuous operation.
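    The perspective method's core rule, preserving the ratio of focal length to scene depth while fixating, reduces to a one-line update once the foreground depth has been recovered (here given directly; the structure-from-motion recovery is outside this sketch):

```python
def zoom_update(focal_length, depth, new_depth):
    """Reactive zoom for the perspective case: scale the focal length
    with the fixated object's depth so that f/Z stays constant and
    the object's image size stays roughly fixed."""
    return focal_length * new_depth / depth
```

For example, an object receding from 2 m to 4 m doubles the required focal length, leaving f/Z unchanged.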

  4. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Tian, Jinshou [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China)

    2015-11-01

    An intelligent control system for an X-ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, switching of the sweep voltage, acquisition of environment parameters, etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general-purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multiple outputs and a single-mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using a graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desired data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and a dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of a multi-channel laser on the Inertial Confinement Fusion Facility.

  5. Calculating video meteor positions in a narrow-angle field with AIP4Win software - Comparison with the positions obtained by SPOSH cameras in a wide-angle field

    Science.gov (United States)

    Tsamis, Vagelis; Margonis, Anastasios; Christou, Apostolos

    2013-01-01

    We present an alternative way to calculate the positions of meteors captured in a narrow video field with a Watec camera and a 28 mm aspherical lens (FOV 11 degrees) by using Astronomical Image Processing for Windows, V2, a classic astrometry and photometry package. We have calculated positions for two Perseid meteors in Lyra which were recorded in August 2010 at Mt. Parnon, Greece. We then compare our astrometric positions with those obtained by SPOSH cameras (FOV 120 degrees) for the same meteors.

  6. Video-Assisted Informed Consent for Cataract Surgery: A Randomized Controlled Trial

    Science.gov (United States)

    Ruan, Xiangcai; Tang, Haoying; Yang, Weizhong; Xian, Zhuanhua; Lu, Min

    2017-01-01

    Purpose. To investigate whether adding video assistance to traditional verbal informed consent advisement improved satisfaction among cataract surgery patients. Methods. This trial enrolled 80 Chinese patients with age-related cataracts scheduled to undergo unilateral phacoemulsification surgery. Patients were randomized into two groups: the video group watched video explaining cataract-related consent information and rewatched specific segments of the video at their own discretion, before receiving traditional verbal consent advisement; the control group did not watch the video. Outcomes included patient satisfaction, refusal to consent, time to complete the consent process, and comprehension measured by a ten-item questionnaire. Results. All 80 enrolled patients signed informed consent forms. Compared with the control group, members of the video group exhibited greater satisfaction (65% versus 86%, p = 0.035) and required less time to complete the consent process (12.3 ± 6.7 min versus 5.6 ± 5.4 min, p < 0.001), while also evincing levels of comprehension commensurate with those reported for patients who did not watch the video (accuracy rate, 77.5% versus 80.2%, p = 0.386). Conclusion. The video-assisted informed consent process had a positive impact on patients' cataract surgery experiences. Additional research is needed to optimize patients' comprehension of the video. PMID:28191349

  7. Active Video Game Exercise Training Improves the Clinical Control of Asthma in Children: Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Evelim L F D Gomes

    Full Text Available The aim of the present study was to determine whether aerobic exercise involving an active video game system improved asthma control, airway inflammation and exercise capacity in children with moderate to severe asthma. A randomized, controlled, single-blinded clinical trial was carried out. Thirty-six children with moderate to severe asthma were randomly allocated to either a video game group (VGG; n = 20) or a treadmill group (TG; n = 16). Both groups completed an eight-week supervised program with two weekly 40-minute sessions. Pre-training and post-training evaluations involved the Asthma Control Questionnaire, exhaled nitric oxide levels (FeNO), maximum exercise testing (Bruce protocol) and lung function. No differences between the VGG and TG were found at baseline. Improvements occurred in both groups with regard to asthma control and exercise capacity. Moreover, a significant reduction in FeNO was found in the VGG (p < 0.05). Although the mean energy expenditure at rest and during exercise training was similar for both groups, the maximum energy expenditure was higher in the VGG. The present findings strongly suggest that aerobic training promoted by an active video game had a positive impact on children with asthma in terms of clinical control, improvement in their exercise capacity and a reduction in pulmonary inflammation. Clinicaltrials.gov NCT01438294.

  8. “For Your Safety” Effects of Camera Surveillance on Safety Impressions, Situation Construal and Attributed Intent

    OpenAIRE

    Van Rompay, Thomas; De Vries, J.A.; Damink, Manon T.; MacTavish, Thomas; Basapur, Santosh

    2015-01-01

    Based on the assumption that monitoring technology in environmental settings impacts people's state of mind and subsequent perceptions, the current study examines the influence of security cameras on safety perceptions and citizen wellbeing. Participants watched a video of city streets that featured (versus did not feature) security cameras. In the camera condition, their safety ratings were significantly higher than in the camera-less (control) condition. In addition, the camera condition caus...

  9. Opto-mechanical design of the G-CLEF flexure control camera system

    Science.gov (United States)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). The G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a guide camera that monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator including triple lenses for producing a pupil; neutral density filters allowing a much brighter star to be used as a target or guide; a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified after the PDR in April 2015.

  10. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera.

    Science.gov (United States)

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho

    2015-12-10

    In this paper, we develop a real-time depth-controllable integral imaging system. With a high-frame-rate camera and a focus-controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive the characteristics and limitations of the light field camera as a 3D broadcasting capture device with precise geometric optics. With further analysis, the implemented system provides more accurate light fields than existing devices, without depth distortion. We adopt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method demonstrates the possibility of a handheld real-time 3D broadcasting system that is cheaper and more practical than previous methods.

  11. Automatic Camera Control System for a Distant Lecture with Videoing a Normal Classroom.

    Science.gov (United States)

    Suganuma, Akira; Nishigori, Shuichiro

    The growth of communication network technology enables students to take part in a distant lecture. Although many lectures in universities are conducted using Web content, normal lectures using a blackboard are still held. The latter style of lecture is good for a teacher's dynamic explanation. A way to adapt it for a distant lecture is to…

  12. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy

    Science.gov (United States)

    Barabas, Federico M.; Masullo, Luciano A.; Stefani, Fernando D.

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  13. Indoor SLAM Using Laser and Camera with Closed-Loop Controller for NAO Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Shuhuan Wen

    2014-01-01

    Full Text Available We present a SLAM method with a closed-loop controller for navigation of the NAO humanoid robot from Aldebaran. The method is based on the integration of a laser and a vision system. The camera is used to recognize the landmarks, whereas the laser provides the information for simultaneous localization and mapping (SLAM). A K-means clustering method is implemented to extract data from different objects. In addition, the robot avoids obstacles using an avoidance function. The closed-loop controller reduces the error between the real position and the estimated position. Finally, simulation and experiments show that the proposed method is efficient and reliable for navigation in indoor environments.

  14. Sensitivity analysis of a CCD video camera system for measuring balance ability in the elderly

    Institute of Scientific and Technical Information of China (English)

    叶双樱; 吴绍长; 潘君玲; 江依法; 周青

    2014-01-01

    Objective: To analyze the sensitivity of a CCD video camera system in measuring the balance ability of the elderly, and thereby explore its potential for assessing human balance function. Methods: The system is composed of a CCD video camera, a video image capture board and analysis software. The main indexes are trunk sway angle (TSA), trunk sway speed (TSS) and fall index (FI). Subjects were grouped by FI into a control group and mild, moderate and severe balance dysfunction groups (normal: FI ≥ 1; mild: 0.7 ≤ FI < 1; moderate: 0.4 ≤ FI < 0.7; severe: FI < 0.4). The balance dysfunction group comprised 42 patients with cerebrovascular accident admitted to the geriatric department of Lishui Second People's Hospital, with a mean age of (67.4 ± 8.0) years; the control group comprised 42 healthy volunteers undergoing physical examination, with a mean age of (65.3 ± 6.5) years. All participants were scored with the Berg Balance Scale, and TSA, TSS and FI were measured with the CCD camera system three times with eyes open (eo) and three times with eyes closed (ec), taking the averages. Results: Compared with the control group, the balance dysfunction group showed statistically significant differences in TSAeo, TSAec, TSSeo, TSSec, FIeo and FIec (P < 0.01) and in Berg Balance Scale scores (P < 0.05). Within the balance dysfunction group, TSAeo, TSAec, TSSeo and TSSec differed significantly among the mild, moderate and severe subgroups (P < 0.05), with the differences between the severe and mild subgroups being highly significant (P < 0.01); Berg scores differed between the severe and mild subgroups (P < 0.05). Conclusion: The CCD video camera system has high sensitivity in measuring human balance function, higher than that of the Berg Balance Scale, and is worth clinical promotion.

  15. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    Science.gov (United States)

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
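    The learning update described above can be sketched in its simplest form: the command is corrected through an estimated inverse image Jacobian. Here the Jacobian inverse is a given matrix and the gain is illustrative; in the paper it is approximated by local neural networks whose weights are identified through repeated tracking:

```python
def ilc_update(u, error, jacobian_inv, gain=0.5):
    """One learning iteration: u_{k+1} = u_k + gain * J^{-1} * e_k.
    `u` and `error` are lists; `jacobian_inv` is a nested-list matrix
    mapping image-plane error to command-space correction."""
    du = [gain * sum(jacobian_inv[i][j] * error[j] for j in range(len(error)))
          for i in range(len(u))]
    return [ui + d for ui, d in zip(u, du)]
```

Repeating the update over trials drives the tracking error down, which is the essence of iterative learning control; the paper's contribution is estimating the time-varying Jacobian indirectly instead of assuming it known as here.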

  16. Efficient rate control scheme for low bit rate H.264/AVC video coding

    Institute of Scientific and Technical Information of China (English)

    LI Zhi-cheng; ZHANG Yong-jun; LIU Tao; GU Wan-yi

    2009-01-01

    This article presents an efficient rate control scheme for H.264/AVC video coding in low-bit-rate environments. In the proposed scheme, an improved rate-distortion (RD) model is developed by both analytical and empirical approaches. It involves an enhanced mean absolute difference (MAD) estimation method and a more rate-robust distortion model. Based on this RD model, an efficient macroblock-layer rate control scheme for H.264/AVC video coding is proposed. Experimental results show that this scheme encodes video sequences with higher peak signal-to-noise ratio gains and generates a bit stream closer to the target rate.
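    The enhanced MAD estimation is not detailed in the abstract; the conventional baseline it improves on is a linear predictor, with MAD of the current basic unit estimated from the co-located unit of the previous frame and the coefficients refit by least squares after each frame. A sketch of that baseline (defaults are illustrative):

```python
def predict_mad(prev_mad, a1=1.0, a2=0.0):
    """Linear MAD prediction used in H.264 rate control:
    MAD_curr ~= a1 * MAD_prev + a2."""
    return a1 * prev_mad + a2

def refit(prev_pairs):
    """Ordinary least-squares refit of (prev_mad, actual_mad) pairs,
    returning updated (a1, a2)."""
    n = len(prev_pairs)
    sx = sum(x for x, _ in prev_pairs)
    sy = sum(y for _, y in prev_pairs)
    sxx = sum(x * x for x, _ in prev_pairs)
    sxy = sum(x * y for x, y in prev_pairs)
    denom = n * sxx - sx * sx
    a1 = (n * sxy - sx * sy) / denom
    a2 = (sy - a1 * sx) / n
    return a1, a2
```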

  17. 76 FR 75911 - Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings

    Science.gov (United States)

    2011-12-05

    From the Federal Register Online via the Government Publishing Office. INTERNATIONAL TRADE COMMISSION. Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings. AGENCY: U.S. International Trade Commission. ACTION: Notice. Section 337 of the Tariff Act of...

  18. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department.

    Science.gov (United States)

    Mathers, Sandra A; Anderson, Helen; McDonald, Sheila; Chesson, Rosemary A

    2010-03-01

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be extremely time-consuming. This was despite the modest

  19. Relationship analysis between transient thermal control mode and image quality for an aerial camera.

    Science.gov (United States)

    Liu, Weiyi; Xu, Yongsen; Yao, Yuan; Xu, Yulei; Shen, Honghai; Ding, Yalin

    2017-02-01

    Thermal control and temperature uniformity are important factors for aerial cameras. This paper describes the problems with existing systems and introduces modifications. The modifications have improved the temperature uniformity from 12.8°C to 4.5°C, and they enable images to be obtained at atmospheric and low pressures (35.4 kPa). First, a thermal-optical analysis of the camera is performed using the finite element method, modeling the effect of temperature level and temperature gradient on imaging. Based on the results of the analysis, corresponding improvements to the thermal control measures are implemented to improve the temperature uniformity. The relationship between the temperature control mode and temperature uniformity is analyzed, and the improved temperature field corresponding to the thermal-optical analysis is studied. Taking into account that convection is affected by low pressure, the paper analyzes the thermal control effect, and imaging results are obtained at low pressure. The experimental results corroborate the analyses.

  20. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    OpenAIRE

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P. T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short conf...

  1. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  2. Video game addiction in emerging adulthood: Cross-sectional evidence of pathology in video game addicts as compared to matched healthy controls.

    Science.gov (United States)

    Stockdale, Laura; Coyne, Sarah M

    2018-01-01

    The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to matched controls based on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical, and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and more ADHD symptoms, compared to controls. Additionally, addicts displayed increased emotional difficulties, including increased depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students, and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addiction is a valid phenomenon. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Fire Extinguisher Robot Using Ultrasonic Camera and Wi-Fi Network Controlled with Android Smartphone

    Science.gov (United States)

    Siregar, B.; Purba, H. A.; Efendi, S.; Fahmi, F.

    2017-03-01

    Fire disasters can occur at any time and result in high losses. Often fire fighters cannot access the source of a fire due to building damage and very high temperatures, or due to the presence of explosive materials. With such constraints and high risk in handling fires, a technological breakthrough that can help fight fire is necessary. Our paper proposes the use of a robot to extinguish fire, controlled from a specified distance in order to reduce the risk. A fire extinguisher robot was assembled to put out fires using a water pump as an actuator. The robot's movement was controlled using an Android smartphone via a Wi-Fi network, utilizing the Wi-Fi module contained in the robot. User commands were sent to the microcontroller on the robot and then translated into robotic movement. We used an ATmega8 as the main microcontroller in the robot. The robot was equipped with a camera and ultrasonic sensors. The camera provided feedback to the user and helped locate the source of fire. The ultrasonic sensors were used to avoid collisions during movement. Feedback provided by the camera on the robot was displayed on the smartphone screen. In the lab testing environment, the robot could move following user commands such as turn right, turn left, forward and backward. The ultrasonic sensors worked well, stopping the robot at a distance of less than 15 cm. In the fire test, the robot performed the task of extinguishing the fire properly.
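    The command-and-override logic described in this record can be sketched in a few lines. The command names, wheel encoding, and stop threshold below are illustrative stand-ins, not the robot's actual firmware interface:

    ```python
    # Hypothetical sketch: smartphone commands map to wheel directions, and the
    # ultrasonic reading overrides forward motion below a stop distance.
    STOP_DISTANCE_CM = 15  # halt when an obstacle is closer than this

    COMMANDS = {
        "forward":  (+1, +1),   # (left wheel, right wheel) direction
        "backward": (-1, -1),
        "left":     (-1, +1),   # spin turn in place
        "right":    (+1, -1),
    }

    def drive(command, obstacle_distance_cm):
        """Return wheel directions for a command, stopping near obstacles."""
        if command == "forward" and obstacle_distance_cm < STOP_DISTANCE_CM:
            return (0, 0)  # collision avoidance overrides the user command
        return COMMANDS.get(command, (0, 0))

    print(drive("forward", 40))  # (1, 1)
    print(drive("forward", 10))  # (0, 0) -- too close, stop
    print(drive("left", 10))     # (-1, 1) -- turning away is still allowed
    ```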

  4. Optimizing process time of laser drilling processes in solar cell manufacturing by coaxial camera control

    Science.gov (United States)

    Jetter, Volker; Gutscher, Simon; Blug, Andreas; Knorz, Annerose; Ahrbeck, Christopher; Nekarda, Jan; Carl, Daniel

    2014-03-01

    In emitter wrap through (EWT) solar cells, laser drilling is used to increase the light-sensitive area by removing emitter contacts from the front side of the cell. For a cell area of 156 × 156 mm², about 24000 via-holes with a diameter of 60 μm have to be drilled into silicon wafers with a thickness of 200 μm. The processing time of 10 to 20 s is determined by the number of laser pulses required for safely opening every hole on the bottom side. Therefore, the largest wafer thickness occurring in a production line defines the processing time. However, wafer thickness varies by roughly ±20%. To reduce the processing time, a coaxial camera control system was integrated into the laser scanner. It observes the bottom breakthrough from the front side of the wafer by measuring the process emissions of every single laser pulse. To achieve the frame rates and latency times required by the repetition rate of the laser (10 kHz), a camera based on cellular neural networks (CNN) was used, in which the images are processed directly on the camera chip by 176 × 144 sensor-processor elements. One image per laser pulse is processed within 36 μs, corresponding to a maximum pulse rate of 25 kHz. The laser is stopped when all of the holes are open on the bottom side. The result is a quality control system in which the processing time of a production line is defined by average instead of maximum wafer thickness.
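    The stop criterion in this record can be illustrated with a toy simulation: the laser keeps pulsing until every hole reports a bottom-side breakthrough, so the slowest hole, rather than a worst-case fixed pulse count, sets the processing time. The per-hole interface below is invented for illustration; the real system derives hole states from the on-chip image processing.

    ```python
    # Toy model: holes_required_pulses[i] is how many pulses hole i needs
    # before its breakthrough emission would be detected by the camera.
    def drill(holes_required_pulses, max_pulses):
        """Pulse until all holes are open; return the pulse count used."""
        open_holes = [False] * len(holes_required_pulses)
        for pulse in range(1, max_pulses + 1):
            for i, needed in enumerate(holes_required_pulses):
                if pulse >= needed:
                    open_holes[i] = True   # emission detected -> breakthrough
            if all(open_holes):
                return pulse               # stop the laser early
        return max_pulses

    # Thinner wafer regions open sooner; the laser stops at the slowest hole.
    print(drill([3, 5, 4], max_pulses=10))  # 5
    ```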

  5. 15 CFR 744.9 - Restrictions on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b.

    Science.gov (United States)

    2010-01-01

    ... reexports of cameras controlled by ECCN 6A003.b.4.b. 744.9 Section 744.9 Commerce and Foreign Trade... on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b. (a) General prohibitions... license is required to export or reexport to any destination other than Canada cameras described in...

  6. Realization of the ergonomics design and automatic control of the fundus cameras

    Science.gov (United States)

    Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye

    2012-12-01

    The principles of ergonomics design in fundus cameras should extend user comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects the patient's fundus images automatically whether or not their eyes are ametropic. Finally, a moving visual target is developed for expanding the fields of the fundus images.

  7. Control software and user interface for the Canarias Infrared Camera Experiment (CIRCE)

    Science.gov (United States)

    Marín-Franch, Antonio; Eikenberry, Stephen S.; Charcos-Llorens, Miguel V.; Edwards, Michelle L.; Varosi, Frank; Hon, David B.; Raines, Steven N.; Warner, Craig D.; Rashkin, David

    2006-06-01

    The Canarias InfraRed Camera Experiment (CIRCE) is a near-infrared visitor instrument for the 10.4-meter Gran Telescopio Canarias (GTC). This document describes the CIRCE software, which has two major functions: instrument control and observatory interface. The instrument control software is based on the UFLIB library, currently used to operate FLAMINGOS-1 and T-ReCS (as well as the CanariCam and FLAMINGOS-2 instruments under development at the University of Florida). The software interface with the telescope will be based on a CORBA server-client architecture. Finally, the user interface will consist of two Java-based interfaces: one for mechanism/detector control, and one for quick look and analysis of data.

  8. Video doorphone

    OpenAIRE

    Horyna, Miroslav

    2015-01-01

    This thesis deals with the design of a door video phone on the Raspberry Pi platform. It describes the Raspberry Pi platform, the Raspberry Pi Camera module, operating systems for Raspberry Pi, and the installation and configuration of the software. It then presents the design and description of the programs created for the door video phone and the design of add-on modules.

  9. Trajectory association across multiple airborne cameras.

    Science.gov (United States)

    Sheikh, Yaser Ajmal; Shah, Mubarak

    2008-02-01

    A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a geometrically motivated likelihood function for evaluating a hypothesized association between observations in multiple cameras. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. maintaining transitive closure between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models; quantitative performance is also reported through simulation.
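    For a small number of tracks, the association step in this record can be sketched as a brute-force search over permutations that maximizes a total log-likelihood. The Gaussian-style scores below are invented; the paper's geometric likelihood and k-dimensional matching are considerably richer.

    ```python
    # Minimal sketch of likelihood-based association between two cameras:
    # score every pairing of trajectories and pick the permutation with the
    # highest total log-likelihood (feasible only for small track counts).
    from itertools import permutations

    def associate(scores):
        """scores[i][j] = log-likelihood that track i (cam A) matches j (cam B)."""
        n = len(scores)
        best = max(permutations(range(n)),
                   key=lambda p: sum(scores[i][p[i]] for i in range(n)))
        return list(best)

    # Invented log-likelihoods: the diagonal pairs are by far the most likely.
    scores = [
        [-0.1, -5.0, -6.0],
        [-4.0, -0.2, -5.0],
        [-6.0, -4.0, -0.3],
    ]
    print(associate(scores))  # [0, 1, 2] -- each track matches its counterpart
    ```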

  10. TOUCHSCREEN USING WEB CAMERA

    Directory of Open Access Journals (Sweden)

    Kuntal B. Adak

    2015-10-01

    In this paper we present a web camera based touchscreen system which uses a simple technique to detect and locate a finger. We have used a camera and a regular screen to achieve our goal. By capturing the video and calculating the position of the finger on the screen, we can determine the touch position and perform a function at that location. Our method is very easy and simple to implement, and our system requirements are less expensive compared to other techniques.
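    The core idea can be sketched as frame differencing: find the pixel with the largest inter-frame change (the moving fingertip) and scale its camera coordinates to screen coordinates. Real systems use a vision library such as OpenCV; plain lists keep this sketch self-contained, and all values are illustrative.

    ```python
    # Hypothetical sketch: difference two grayscale frames, take the pixel of
    # maximum change, and map it from camera resolution to screen resolution.
    def locate_touch(prev, curr, screen_w, screen_h):
        """Return (x, y) on the screen for the largest inter-frame change."""
        rows, cols = len(curr), len(curr[0])
        best, pos = -1, (0, 0)
        for r in range(rows):
            for c in range(cols):
                diff = abs(curr[r][c] - prev[r][c])
                if diff > best:
                    best, pos = diff, (r, c)
        r, c = pos
        # scale camera coordinates to screen coordinates
        return (c * screen_w // cols, r * screen_h // rows)

    prev = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    curr = [[0, 0, 0], [0, 200, 0], [0, 0, 0]]   # "finger" appears at center
    print(locate_touch(prev, curr, 1920, 1080))  # (640, 360)
    ```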

  11. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  12. Music and Video as Distractors for Boys with ADHD in the Classroom: Comparison with Controls, Individual Differences, and Medication Effects

    Science.gov (United States)

    Pelham, William E.; Waschbusch, Daniel A.; Hoza, Betsy; Gnagy, Elizabeth M.; Greiner, Andrew R.; Sams, Susan E.; Vallano, Gary; Majumdar, Antara; Carter, Randy L.

    2011-01-01

    This study examined the effects of music and video on the classroom behavior and performance of boys with and without attention deficit hyperactivity disorder (ADHD) and examined the effects of 0.3 mg/kg methylphenidate (MPH). In one study, 41 boys with ADHD and 26 controls worked in the presence of no distractor, music, or video. Video produced…

  14. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
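    One building block of such a pipeline, segmenting moving-target pixels from a background model, can be sketched in a few lines. A real system adds connected components, per-target identities, and camera hand-off; the threshold and pixel values below are illustrative.

    ```python
    # Illustrative background subtraction: flag pixels that differ from a
    # background model by more than a threshold as moving-target pixels.
    def moving_mask(frame, background, threshold=25):
        """Return 1 where the frame departs from the background, else 0."""
        return [[1 if abs(f - b) > threshold else 0
                 for f, b in zip(frow, brow)]
                for frow, brow in zip(frame, background)]

    background = [[10, 10, 10], [10, 10, 10]]
    frame      = [[10, 90, 10], [10, 95, 12]]  # a bright target in column 1
    print(moving_mask(frame, background))      # [[0, 1, 0], [0, 1, 0]]
    ```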

  18. Design of a control system for ultrafast x-ray camera working in a single photon counting mode

    Science.gov (United States)

    Zoladz, Miroslaw; Rauza, Jacek; Kasinski, Krzysztof; Maj, Piotr; Grybos, Pawel

    2015-09-01

    A prototype of an ultra-fast X-ray camera controller working in a single photon counting mode and based on an ASIC is presented in this paper. The ASIC architecture is discussed with special attention to the digital part. We present the custom soft processor used as the ASIC control-sequence generator. The processor allows dynamic program downloading and generates control sequences with up to an 80 MHz clock rate (preliminary results). An assembler with a very simple syntax has been defined to speed up the development of processor programs. Discriminator threshold dispersion correction has been performed to confirm proper camera controller operation.
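    A toy version of the assembler idea mentioned above: mnemonics with a very simple syntax are translated into numeric control words that a soft processor could stream to the ASIC. The opcodes, mnemonics, and word layout are invented for illustration; the real instruction set is not given in the abstract.

    ```python
    # Hypothetical two-field control word: opcode in the high byte, operand in
    # the low byte. One source line assembles to one word.
    OPCODES = {"NOP": 0x0, "SET": 0x1, "WAIT": 0x2, "LOOP": 0x3}

    def assemble(program):
        """Translate 'MNEMONIC operand' lines into (opcode << 8 | operand) words."""
        words = []
        for line in program.strip().splitlines():
            parts = line.split()
            mnemonic = parts[0]
            operand = int(parts[1]) if len(parts) > 1 else 0
            words.append((OPCODES[mnemonic] << 8) | operand)
        return words

    program = """
    SET 5
    WAIT 10
    NOP
    """
    print([hex(w) for w in assemble(program)])  # ['0x105', '0x20a', '0x0']
    ```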

  19. Semi-automated sorting using holographic optical tweezers remotely controlled by eye/hand tracking camera

    Science.gov (United States)

    Tomori, Zoltan; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Zemánek, Pavel

    2016-12-01

    We propose improved control software for holographic optical tweezers (HOT) suitable for simple semi-automated sorting. The controller receives data from both the human interface sensors and the HOT microscope camera and processes them. As a result, the new positions of active laser traps are calculated, packed into the network format and sent to the remote HOT. Using the photo-polymerization technique, we created a sorting container consisting of two parallel horizontal walls, where one wall contains "gates" marking the places where trapped particles enter the container. The positions of particles and gates are obtained by image analysis, which can be exploited to achieve a higher level of automation. Sorting is documented on a computer game simulation and a real experiment.

  20. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department

    Energy Technology Data Exchange (ETDEWEB)

    Mathers, Sandra A. [Aberdeen Royal Infirmary, Department of Radiology, Aberdeen (United Kingdom); The Robert Gordon University, Faculty of Health and Social Care, Aberdeen (United Kingdom); Anderson, Helen [Royal Aberdeen Children' s Hospital, Department of Radiology, Aberdeen (United Kingdom); McDonald, Sheila [Royal Aberdeen Children' s Hospital, Aberdeen (United Kingdom); Chesson, Rosemary A. [University of Aberdeen, School of Medicine and Dentistry, Aberdeen (United Kingdom)

    2010-03-15

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches to working with children that offer alternatives to, for instance, the structured questioning of children by researchers through questionnaires or interviews. Our aim was to examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate, but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments, or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be extremely time-consuming.

  1. Joystick-controlled video console game practice for developing power wheelchair users' indoor driving skills.

    Science.gov (United States)

    Huang, Wei Pin; Wang, Chia Cheng; Hung, Jo Hua; Chien, Kai Chun; Liu, Wen-Yu; Cheng, Chih-Hsiu; Ng, How-Hing; Lin, Yang-Hua

    2015-02-01

    [Purpose] This study aimed to determine the effectiveness of joystick-controlled video console games in enhancing subjects' ability to control power wheelchairs. [Subjects and Methods] Twenty healthy young adults without prior experience of driving power wheelchairs were recruited. Four commercially available video games were used as training programs to practice joystick control in catching falling objects, crossing a river, tracing the route while floating on a river, and navigating through a garden maze. An indoor power wheelchair driving test, including straight lines, and right and left turns, was completed before and after the video game practice, during which electromyographic signals of the upper limbs were recorded. The paired t-test was used to compare the differences in driving performance and muscle activities before and after the intervention. [Results] Following the video game intervention, participants took significantly less time to complete the course, with less lateral deviation when turning the indoor power wheelchair. However, muscle activation in the upper limbs was not significantly affected. [Conclusion] This study demonstrates the feasibility of using joystick-controlled commercial video games to train individuals in the control of indoor power wheelchairs.
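    The before/after comparison in such a design rests on the paired t-test, whose statistic is computed from per-subject differences. The completion times below are invented for illustration, not the study's measurements.

    ```python
    # Paired t statistic: mean of per-subject differences divided by the
    # standard error of those differences.
    import math

    def paired_t(before, after):
        """Return the paired t statistic for before-vs-after measurements."""
        diffs = [b - a for b, a in zip(before, after)]
        n = len(diffs)
        mean = sum(diffs) / n
        var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
        return mean / math.sqrt(var / n)

    before = [62.1, 58.4, 70.3, 66.0, 61.5]  # seconds to finish the course
    after  = [55.0, 54.2, 63.1, 60.8, 57.9]
    t = paired_t(before, after)
    print(round(t, 2))  # positive t: course times dropped after practice
    ```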

  2. Performance of Watec 910 HX camera for meteor observing

    Science.gov (United States)

    Ocaña, Francisco; Zamorano, Jaime; Tapia Ayuga, Carlos E.

    2014-01-01

    The new Watec 910 HX model is a 0.5 MPix multipurpose video camera with up to ×256 frame integration capability. We present a sensitivity and spectral characterization done at the Universidad Complutense de Madrid Instrument Laboratory (LICA). In addition, we have carried out a field test to show the performance of this camera for meteor observing. With respect to the similar model 902 H2 Ultimate, the new camera has additional set-up controls that are important for the scientific use of the recordings. However, the overall performance does not justify the extra cost for most meteor observers.

  3. On Manufacturing High-quality Instructional Videos with Multi-camera by Limited Video Editing Resources in University

    Institute of Scientific and Technical Information of China (English)

    张锐; 黄燕

    2012-01-01

    Manufacturing high-quality instructional videos is an important goal for colleges and universities, and for some subject matter, achieving high quality requires shooting with multiple cameras, multiple scene scales and multiple angles. However, owing to financial constraints, some colleges and universities lack these conditions. This is a contradiction. Covering shooting, editing and directing in detail, this paper expounds the ideas, methods and techniques of manufacturing high-quality instructional videos with multiple camera positions (15 positions) using limited video editing resources, supported by examples, in the hope of exchanging experience with peers.

  4. Do Instructional Videos on Sputum Submission Result in Increased Tuberculosis Case Detection? A Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Grace Mhalu

    Full Text Available We examined the effect of an instructional video about the production of diagnostic sputum on case detection of tuberculosis (TB, and evaluated the acceptance of the video.Randomized controlled trial.We prepared a culturally adapted instructional video for sputum submission. We analyzed 200 presumptive TB cases coughing for more than two weeks who attended the outpatient department of the governmental Municipal Hospital in Mwananyamala (Dar es Salaam, Tanzania. They were randomly assigned to either receive instructions on sputum submission using the video before submission (intervention group, n = 100 or standard of care (control group, n = 100. Sputum samples were examined for volume, quality and presence of acid-fast bacilli by experienced laboratory technicians blinded to study groups.Median age was 39.1 years (interquartile range 37.0-50.0; 94 (47% were females, 106 (53% were males, and 49 (24.5% were HIV-infected. We found that the instructional video intervention was associated with detection of a higher proportion of microscopically confirmed cases (56%, 95% confidence interval [95% CI] 45.7-65.9%, sputum smear positive patients in the intervention group versus 23%, 95% CI 15.2-32.5%, in the control group, p <0.0001, an increase in volume of specimen defined as a volume ≥3ml (78%, 95% CI 68.6-85.7%, versus 45%, 95% CI 35.0-55.3%, p <0.0001, and specimens less likely to be salivary (14%, 95% CI 7.9-22.4%, versus 39%, 95% CI 29.4-49.3%, p = 0.0001. Older age, but not the HIV status or sex, modified the effectiveness of the intervention by improving it positively. When asked how well the video instructions were understood, the majority of patients in the intervention group reported to have understood the video instructions well (97%. 
Most of the patients thought the video would be useful in the cultural setting of Tanzania (92%.Sputum submission instructional videos increased the yield of tuberculosis cases through better quality of sputum

  5. Obstacle Avoidance Control for Mobile Robot Based on Single CCD Camera and Ultrasonic Sensors

    Science.gov (United States)

    Nara, Shunsuke; Takahashi, Satoru

    This paper proposes a method of obstacle avoidance control using a single CCD camera and ultrasonic sensors mounted on a mobile robot. First, based on the change of brightness on the image caused by the robot's motion, we calculate the optical flow by the block matching method based on normalized correlation and detect the obstacle area on the image. Further, in order to reduce the error of the detected area, we combine the distance information obtained by the ultrasonic sensors with the obstacle area on the image to determine the position of the obstacle with high accuracy. Using this position information, we generate reference points for the trajectory of the mobile robot. The trajectory is smooth and is generated by minimizing a certain cost function. The mobile robot then moves along the trajectory while avoiding the obstacle. Finally, the usefulness of the proposed method is shown through experiments.
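    The block matching step with normalized correlation can be sketched as follows: for one block in the previous frame, search nearby positions in the current frame and return the displacement with the highest correlation score. Block size, search radius, and pixel values are illustrative.

    ```python
    # Minimal block matching with normalized correlation on 2-D pixel lists.
    import math

    def ncc(a, b):
        """Normalized correlation between two equal-size flattened blocks."""
        ma = sum(a) / len(a); mb = sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                        sum((y - mb) ** 2 for y in b))
        return num / den if den else 0.0

    def block(img, r, c, size):
        """Flatten the size x size block with top-left corner (r, c)."""
        return [img[r + i][c + j] for i in range(size) for j in range(size)]

    def match(prev, curr, r, c, size, search):
        """Best (dr, dc) displacement of prev's block inside curr."""
        ref = block(prev, r, c, size)
        candidates = [(dr, dc)
                      for dr in range(-search, search + 1)
                      for dc in range(-search, search + 1)
                      if 0 <= r + dr <= len(curr) - size
                      and 0 <= c + dc <= len(curr[0]) - size]
        return max(candidates,
                   key=lambda d: ncc(ref, block(curr, r + d[0], c + d[1], size)))

    prev = [[0] * 6 for _ in range(6)]
    prev[1][1] = 255                 # bright feature in the previous frame
    curr = [[0] * 6 for _ in range(6)]
    curr[2][3] = 255                 # same feature, moved by (+1, +2)
    print(match(prev, curr, 1, 1, 2, 2))  # (1, 2)
    ```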

  6. Numerical simulations and analyses of temperature control loop heat pipe for space CCD camera

    Science.gov (United States)

    Meng, Qingliang; Yang, Tao; Li, Chunlin

    2016-10-01

    As one of the key units of a space CCD camera, the temperature range and stability of the CCD components affect the image quality indexes. Reasonable thermal design and robust thermal control devices are needed. One kind of temperature control loop heat pipe (TCLHP) is designed, which highly meets the thermal control requirements of CCD components. In order to study the dynamic behaviors of heat and mass transfer of TCLHP, particularly in the orbital flight case, a transient numerical model is developed by using well-established empirical correlations for flow models within three-dimensional thermal modeling. The temperature control principle and details of the mathematical model are presented. The model is used to study the operating state and flow and heat characteristics based upon the analyses of variations of temperature, pressure and quality under different operating modes and external heat flux variations. The results indicate that TCLHP can satisfy the thermal control requirements of CCD components well, and always ensures good temperature stability and uniformity. A comparison between flight data and simulated results shows that the model is accurate to within 1°C. The model can be used for predicting and understanding the transient performance of TCLHP.
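    The control problem such a model addresses can be illustrated with a much simpler lumped-parameter sketch: one thermal node with an on/off heater, integrated with explicit Euler. All coefficients are invented for illustration and bear no relation to the paper's three-dimensional model.

    ```python
    # Toy single-node thermal plant: cooling toward a 20 C ambient plus a
    # bang-bang heater that switches on below the set point.
    def simulate(t_start, t_set, steps, dt=1.0, tau=50.0, heat=0.5):
        """Return the temperature trace under simple on/off heater control."""
        temps = [t_start]
        t = t_start
        for _ in range(steps):
            heater = heat if t < t_set else 0.0      # bang-bang control law
            t += dt * (-(t - 20.0) / tau + heater)   # Euler integration step
            temps.append(t)
        return temps

    trace = simulate(t_start=20.0, t_set=25.0, steps=200)
    print(round(trace[-1], 1))  # settles near the 25.0 C set point
    ```

    The trace rises from ambient and then oscillates in a narrow band around the set point, which is the qualitative behavior a temperature-controlled loop heat pipe is designed to enforce far more smoothly.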

  7. Controlling Small Fixed Wing UAVs to Optimize Image Quality from On-Board Cameras

    Science.gov (United States)

    Jackson, Stephen Phillip

    Small UAVs have shown great promise as tools for collecting aerial imagery both quickly and cheaply. Furthermore, using a team of small UAVs, as opposed to one large UAV, has shown promise as a cheaper, faster, and more robust method for collecting image data over a large area. Unfortunately, the autonomy of small UAVs has not yet reached the point where they can be relied upon to collect good aerial imagery without human intervention or supervision. The work presented here intends to increase the level of autonomy of small UAVs so that they can independently and reliably collect quality aerial imagery. The main contribution of this paper is a novel approach to controlling small fixed wing UAVs that optimizes the quality of the images captured by cameras on board the aircraft. This main contribution is built on three minor contributions: a kinodynamic motion model for small fixed wing UAVs, an iterative Gaussian sampling strategy for rapidly exploring random trees, and a receding horizon, nonlinear model predictive controller for controlling a UAV's sensor footprint. The kinodynamic motion model is built on the traditional unicycle model of an aircraft. In order to create dynamically feasible paths, the kinodynamic motion model augments the kinematic unicycle model by adding a first order estimate of the aircraft's roll dynamics. Experimental data are presented that not only validate this novel kinodynamic motion model, but also show a 25% improvement over the traditional unicycle model. A novel Gaussian biased sampling strategy is presented for building a rapidly exploring random tree that quickly iterates to a near optimal path. This novel sampling strategy does not require a method for calculating the nearest node to a point, which means that it runs much faster than the traditional RRT algorithm, but it still results in a Gaussian distribution of nodes. Furthermore, because it uses the kinodynamic motion model, the near optimal path it generates is, by

  8. Normative values for a video-force plate assessment of postural control in athletic children.

    Science.gov (United States)

    Howell, David R; Meehan, William P

    2016-07-01

    The objective of this study was to provide normative data for young athletes during the three stances of the modified Balance Error Scoring System (mBESS) using an objective video-force plate system. Postural control was measured in 398 athletes between 8 and 18 years of age during the three stances of the mBESS using a video-force plate rating system. Girls exhibited better postural control than boys during each stance of the mBESS. Age was not significantly associated with postural control. We provide normative data for a video-force plate assessment of postural stability in pediatric athletes during the three stances of the mBESS.

  9. Live lecture versus video podcast in undergraduate medical education: A randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Fukuta Junaid

    2010-10-01

    Full Text Available Abstract Background Information technology is finding an increasing role in the training of medical students. We compared information recall and student experience and preference after live lectures and video podcasts in undergraduate medical education. Methods We performed a crossover randomised controlled trial. 100 students were randomised to live lecture or video podcast for one clinical topic. Live lectures were given by the same instructor as the narrator of the video podcasts. The video podcasts comprised Powerpoint™ slides narrated using the same script as the lecture. They were then switched to the other group for a second clinical topic. Knowledge was assessed using multiple choice questions and qualitative information was collected using a questionnaire. Results No significant difference was found on multiple choice questioning immediately after the session. The subjects enjoyed the convenience of the video podcast and the ability to stop, review and repeat it, but found it less engaging as a teaching method. They expressed a clear preference for the live lecture format. Conclusions We suggest that video podcasts are not ready to replace traditional teaching methods, but may have an important role in reinforcing learning and aiding revision.

  10. Live lecture versus video podcast in undergraduate medical education: A randomised controlled trial

    Science.gov (United States)

    2010-01-01

    Background Information technology is finding an increasing role in the training of medical students. We compared information recall and student experience and preference after live lectures and video podcasts in undergraduate medical education. Methods We performed a crossover randomised controlled trial. 100 students were randomised to live lecture or video podcast for one clinical topic. Live lectures were given by the same instructor as the narrator of the video podcasts. The video podcasts comprised Powerpoint™ slides narrated using the same script as the lecture. They were then switched to the other group for a second clinical topic. Knowledge was assessed using multiple choice questions and qualitative information was collected using a questionnaire. Results No significant difference was found on multiple choice questioning immediately after the session. The subjects enjoyed the convenience of the video podcast and the ability to stop, review and repeat it, but found it less engaging as a teaching method. They expressed a clear preference for the live lecture format. Conclusions We suggest that video podcasts are not ready to replace traditional teaching methods, but may have an important role in reinforcing learning and aiding revision. PMID:20932302

  11. Metabolic responses of upper-body accelerometer-controlled video games in adults.

    Science.gov (United States)

    Stroud, Leah C; Amonette, William E; Dupler, Terry L

    2010-10-01

    Historically, video games required little physical exertion, but new systems utilize handheld accelerometers that require upper-body movement. It is not fully understood if the metabolic workload while playing these games is sufficient to replace routine physical activity. The purpose of this study was to quantify metabolic workloads and estimate caloric expenditure while playing upper-body accelerometer-controlled and classic seated video games. Nineteen adults completed a peak oxygen consumption treadmill test followed by an experimental session where exercising metabolism and ventilation were measured while playing 3 video games: control (CON), low activity (LOW) and high activity (HI). Resting metabolic measures (REST) were also acquired. Caloric expenditure was estimated using the Weir equation. Mean oxygen consumption normalized to body weight for HI condition was greater than LOW, CON, and REST. Mean oxygen consumption normalized to body weight for LOW condition was also greater than CON and REST. Mean exercise intensities of oxygen consumption reserve for HI, LOW, and CON were 25.8% ± 5.1%, 6.4% ± 4.8%, and 0.8% ± 2.4%, respectively. Estimated caloric expenditure during the HI was significantly related to aerobic fitness, but not during other conditions. An active video game significantly elevated oxygen consumption and heart rate, but the increase was dependent on the type of game. The mean oxygen consumption reserve during the HI video game was below recommended international standards for moderate and vigorous activity. Although upper-body accelerometer-controlled video games provided a greater exercising stimulus than classic seated video games, these data suggest they should not replace routine moderate or vigorous exercise.
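    The Weir equation mentioned above, and the oxygen-consumption-reserve fraction used to express exercise intensity, can be computed as follows. This sketch uses the abbreviated form of the Weir equation with gas volumes in L/min; it is a generic illustration, not the authors' analysis code.

    ```python
    def weir_kcal_per_min(vo2_l_min, vco2_l_min):
        """Abbreviated Weir equation: energy expenditure in kcal/min from
        oxygen uptake (VO2) and carbon dioxide output (VCO2), both in L/min."""
        return 3.941 * vo2_l_min + 1.106 * vco2_l_min

    def vo2_reserve_fraction(vo2, vo2_rest, vo2_peak):
        """Fraction of the oxygen consumption reserve: where an activity's
        VO2 sits between resting and peak VO2 (0 = rest, 1 = peak)."""
        return (vo2 - vo2_rest) / (vo2_peak - vo2_rest)
    ```

    For example, an activity at a VO2 of 1.0 L/min for a subject resting at 0.3 L/min with a peak of 3.1 L/min sits at 25% of the oxygen consumption reserve, comparable to the study's HI condition.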

  12. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  13. Current-Loop Control for the Pitching Axis of Aerial Cameras via an Improved ADRC

    Directory of Open Access Journals (Sweden)

    BingYou Liu

    2017-01-01

    Full Text Available An improved active disturbance rejection controller (ADRC) is designed to eliminate current-loop disturbances in the pitching axis control system of an aerial camera. The improved ADRC is composed of a tracking differentiator (TD), an improved extended state observer (ESO), an improved nonlinear state error feedback (NLSEF), and a disturbance compensation device (DCD). The TD is used to arrange the transient process. The improved ESO observes the extended state formed by nonlinear dynamics, model uncertainty, and external disturbances, so that the time-varying behavior of the current loop can be predicted. The improved NLSEF restrains the residual errors of the current loop, and the DCD compensates its time variation in real time. The improved ADRC is designed around a new nonlinear function, newfal(·), which exhibits better continuity and smoothness than previously available nonlinear functions and can therefore effectively suppress the high-frequency flutter phenomenon. As a result, the improved ADRC achieves better control performance and eliminates the current-loop disturbances. Finally, simulation experiments are performed. Results show that the improved ADRC performs better than both a proportional-integral (PI) control strategy and the traditional ADRC.
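    For context, Han's classical fal(·) nonlinear function, the standard ADRC building block whose slope discontinuity at the threshold motivates smoother replacements such as this paper's newfal(·), looks like this (the paper's newfal(·) itself is not reproduced here):

    ```python
    import math

    def fal(e, alpha=0.5, delta=0.1):
        """Han's classical ADRC nonlinear gain: linear inside |e| <= delta,
        power-law |e|**alpha * sign(e) outside. The function is continuous at
        |e| = delta, but its slope is not, and that kink in the gain curve is
        one source of the high-frequency flutter that smoother variants aim
        to remove."""
        if abs(e) <= delta:
            return e / (delta ** (1.0 - alpha))
        return math.copysign(abs(e) ** alpha, e)
    ```

    Small errors are amplified less aggressively than a pure power law would, while large errors are attenuated, which is the "small error, large gain; large error, small gain" heuristic behind ADRC's NLSEF.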

  14. Geometric Calibration of ZIYUAN-3 Three-Line Cameras Combining Ground Control Points and Lines

    Science.gov (United States)

    Cao, Jinshan; Yuan, Xiuxiao; Gong, Jianya

    2016-06-01

    Due to the large biases between the laboratory-calibrated values of the orientation parameters and their in-orbit true values, the initial direct georeferencing accuracy of the Ziyuan-3 (ZY-3) three-line camera (TLC) images can only reach the kilometre level. In this paper, a point-based geometric calibration model of the ZY-3 TLCs is firstly established by using the collinearity constraint, and then a line-based geometric calibration model is established by using the coplanarity constraint. With the help of both the point-based and the line-based models, a feasible in-orbit geometric calibration approach for the ZY-3 TLCs combining ground control points (GCPs) and ground control lines (GCLs) is presented. Experimental results show that like GCPs, GCLs can also provide effective ground control information for the geometric calibration of the ZY-3 TLCs. The calibration accuracy of the look angles of charge-coupled device (CCD) detectors achieved by using the presented approach reached up to about 1.0''. After the geometric calibration, the direct georeferencing accuracy of the ZY-3 TLC images without ground controls was significantly improved from the kilometre level to better than 11 m in planimetry and 9 m in height. A more satisfactory georeferencing accuracy of better than 3.5 m in planimetry and 3.0 m in height was achieved after the block adjustment with four GCPs.

  15. Streaming and congestion control using scalable video coding based on H.264/AVC

    Institute of Scientific and Technical Information of China (English)

    NGUYEN Dieu Thanh; OSTERMANN Joern

    2006-01-01

    This paper presents a streaming system using scalable video coding based on H.264/AVC. The system provides a congestion control algorithm supported by channel bandwidth estimation of the client. It uses retransmission only for packets of the base layer to disburden the congested network. The bandwidth estimation allows for adjusting the transmission rate quickly to the current available bandwidth of the network. Compared to binomial congestion control, the proposed system allows for shorter start-up times and data rate adaptation. The paper describes the components of this streaming system and the results of experiments showing that the proposed approach works effectively for streaming video.
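    The layer-selection side of such a congestion-controlled scalable stream can be sketched as below. The EWMA smoothing of client bandwidth samples and the cumulative-rate rule are generic illustrations, not the paper's algorithm.

    ```python
    def select_layers(layer_rates_kbps, est_bandwidth_kbps):
        """Pick how many cumulative scalable layers fit the estimated
        bandwidth. The base layer (index 0) is always sent; enhancement
        layers are added while the cumulative rate stays under the estimate."""
        total, layers = 0.0, 0
        for rate in layer_rates_kbps:
            if total + rate > est_bandwidth_kbps and layers > 0:
                break
            total += rate
            layers += 1
        return layers

    def ewma_bandwidth(samples_kbps, alpha=0.3):
        """Smooth noisy client-side bandwidth samples with an exponentially
        weighted moving average, so the sender adapts quickly but not
        erratically to the currently available bandwidth."""
        est = samples_kbps[0]
        for s in samples_kbps[1:]:
            est = alpha * s + (1 - alpha) * est
        return est
    ```

    Restricting retransmission to base-layer packets, as the paper describes, then keeps the stream decodable while shedding enhancement-layer traffic under congestion.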

  16. Preliminary studies on an automated 3D fish tracking method based on a single video camera

    Institute of Scientific and Technical Information of China (English)

    徐盼麟; 韩军; 童剑锋

    2012-01-01

    To improve the efficiency of extracting fish behavioral data, this study proposes a single-camera method for three-dimensional observation of fish: a waterproof mirror mounted above the experimental tank simulates a second camera shooting from above, so that 3D imaging is achieved with a single camera. The IMMJPDA (interacting multiple model joint probabilistic data association) multi-target tracking algorithm is applied to track fish motion in 3D automatically and in real time, and measurement accuracy is improved through camera tilt correction and camera calibration. In a tracking experiment with six rummy-nose tetras, the method correctly distinguished, extracted, and tracked individual fish and their mirror images, automatically output each fish's 3D coordinates, instantaneous speed, and heading, and generated complete 3D trajectory plots of fish behavior.%The study of fish behavior lays an important foundation for understanding fish migration routes, improving fishing efficiency and protecting fishery resources. A large number of data are necessary in such studies, such as measurements of stress response, schooling, and migration. However, getting these data is a time-consuming process. As fish behavior is often recorded in the form of video, and a stereo camera recording system is popularly used for measurement and observation in laboratory studies, how to extract the data of fish behavior efficiently from video has been a major problem in the study of fish behavior. So far, fish 3D coordinates are usually calculated by hand, or by self-made software which turns imported fish 2D coordinates into 3D ones. In order to improve fish behavior data extraction efficiency, this paper presents an automated 3D fish tracking method based on a single video camera. A waterproof mirror was set above the experimental aquaria to simulate a camera shooting from the top, which provided a way to use a single camera for 3D imaging. We extract the data of fish behavior automatically by a 3D fish tracking method which is divided into four parts: distortion calibration of the single camera system, transfer formula between image

  17. Prompt gamma imaging with a slit camera for real-time range control in proton therapy.

    Science.gov (United States)

    Smeets, J; Roellinghoff, F; Prieels, D; Stichelbaut, F; Benilov, A; Busca, P; Fiorini, C; Peloso, R; Basilavecchia, M; Frizzi, T; Dehaes, J C; Dubus, A

    2012-06-07

    Treatments delivered by proton therapy are affected by uncertainties on the range of the beam within the patient, requiring medical physicists to add safety margins on the penetration depth of the beam. To reduce these margins and deliver safer treatments, different projects are currently investigating real-time range control by imaging prompt gammas emitted along the proton tracks in the patient. This study reports on the feasibility, development and test of a new concept of prompt gamma camera using a slit collimator to obtain a one-dimensional projection of the beam path on a scintillation detector. This concept was optimized, using the Monte Carlo code MCNPX version 2.5.0, to select high energy photons correlated with the beam range and detect them with both high statistics and sufficient spatial resolution. To validate the Monte Carlo model, spectrometry measurements of secondary particles emitted by a PMMA target during proton irradiation at 160 MeV were realized. An excellent agreement with the simulations was observed when using subtraction methods to isolate the gammas in direct incidence. A first prototype slit camera using the HiCam gamma detector was consequently prepared and tested successfully at 100 and 160 MeV beam energies. Results confirmed the potential of this concept for real-time range monitoring with millimetre accuracy in pencil beam scanning mode for typical clinical conditions. If we neglect electronic dead times and rejection of detected events, the current solution with its collimator at 15 cm from the beam axis can achieve a 1-2 mm standard deviation on range estimation in a homogeneous PMMA target for numbers of protons that correspond to doses in water at the Bragg peak as low as 15 cGy at 100 MeV and 25 cGy at 160 MeV assuming pencil beams with a Gaussian profile of 5 mm sigma at target entrance.

  18. ESO adaptive optics NGSD/LGSD detector and camera controller for the E-ELT

    Science.gov (United States)

    Reyes-Moreno, Javier; Downing, Mark; Di Lieto, Nicola

    2016-07-01

    This paper presents the development of the ESO prototype detector controller for the Adaptive Optics imager on the E-ELT, which is based on the e2v Natural Guide Star Detector (NGSD) and Laser Guide Star Detector (LGSD). Both NGSD and LGSD are prototype detectors aimed at proving CMOS technology in the context of the requirement for a Large Visible AO WFS Detector for the E-ELT. The NGSD is a custom-designed CMOS array detector of 880×840 pixels organized as 44×42 sub-apertures of 20×20 pixels each. The NGSD is exactly 1/4 of the LGSD and is therefore considered a scaled-down demonstrator for the LGSD. The detector controller requirements present important challenges in the design of the electronics due to the low power, low noise, and highly parallel data rates of the detectors involved. The general architecture of the controller, the front-end electronics to drive and read out the detector, and the camera design are described here. The electronics are based on advanced Xilinx FPGAs.

  19. Do Speed Cameras Reduce Collisions?

    OpenAIRE

    Skubic, Jeffrey; Johnson, Steven B.; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods – before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not indepe...

  20. Do speed cameras reduce collisions?

    Science.gov (United States)

    Skubic, Jeffrey; Johnson, Steven B; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods - before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions.

  1. Video-Text Processing by Using Motorola 68020 CPU and its Environment

    Science.gov (United States)

    1991-03-01

    accomplished utilizing microcomputer-controlled smart video cameras or digital editing devices currently available. In recent years, video technology and...

  2. A theory-based video messaging mobile phone intervention for smoking cessation: randomized controlled trial.

    Science.gov (United States)

    Whittaker, Robyn; Dorey, Enid; Bramley, Dale; Bullen, Chris; Denny, Simon; Elley, C Raina; Maddison, Ralph; McRobbie, Hayden; Parag, Varsha; Rodgers, Anthony; Salmon, Penny

    2011-01-21

    Advances in technology allowed the development of a novel smoking cessation program delivered by video messages sent to mobile phones. This social cognitive theory-based intervention (called "STUB IT") used observational learning via short video diary messages from role models going through the quitting process to teach behavioral change techniques. The objective of our study was to assess the effectiveness of a multimedia mobile phone intervention for smoking cessation. A randomized controlled trial was conducted with 6-month follow-up. Participants had to be 16 years of age or over, be current daily smokers, be ready to quit, and have a video message-capable phone. Recruitment targeted younger adults predominantly through radio and online advertising. Registration and data collection were completed online, prompted by text messages. The intervention group received an automated package of video and text messages over 6 months that was tailored to self-selected quit date, role model, and timing of messages. Extra messages were available on demand to beat cravings and address lapses. The control group also set a quit date and received a general health video message sent to their phone every 2 weeks. The target sample size was not achieved due to difficulty recruiting young adult quitters. Of the 226 randomized participants, 47% (107/226) were female and 24% (54/226) were Maori (indigenous population of New Zealand). Their mean age was 27 years (SD 8.7), and there was a high level of nicotine addiction. Continuous abstinence at 6 months was 26.4% (29/110) in the intervention group and 27.6% (32/116) in the control group (P = .8). Feedback from participants indicated that the support provided by the video role models was important and appreciated. This study was not able to demonstrate a statistically significant effect of the complex video messaging mobile phone intervention compared with simple general health video messages via mobile phone. However, there was

  3. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system to make stereo photographs or videos, based on just two mirrors that split the image field, was built in 1989 and recently adapted to a digital camera setup.

  4. Underwater camera with depth measurement

    Science.gov (United States)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined from variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for the proposed underwater RGB-D camera system, as it would require a complete redesign of the light source component. The ToF camera system, in contrast, allows arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
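    A continuous-wave ToF camera recovers depth from the phase shift of the modulated light, d = (c/n)·φ/(4π·f_mod). The sketch below applies this standard relation; treating the underwater case by dividing by the refractive index of water is our own simplifying assumption, not the paper's calibration procedure.

    ```python
    import math

    C = 299_792_458.0   # speed of light in vacuum, m/s
    N_WATER = 1.33      # approximate refractive index of water (assumption)

    def tof_distance(phase_rad, f_mod_hz, refractive_index=1.0):
        """Continuous-wave ToF depth from measured phase shift:
        d = (c/n) * phi / (4 * pi * f_mod). Light travels out and back,
        hence the factor of 2 folded into the 4*pi."""
        v = C / refractive_index
        return v * phase_rad / (4.0 * math.pi * f_mod_hz)

    def unambiguous_range(f_mod_hz, refractive_index=1.0):
        """Phase wraps at 2*pi, so depths repeat every (c/n) / (2 * f_mod)."""
        return (C / refractive_index) / (2.0 * f_mod_hz)
    ```

    At a 20 MHz modulation frequency the unambiguous range in air is about 7.5 m; in water both the measured distance and the unambiguous range shrink by the factor 1/n.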

  5. SEFIS Video Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a fishery-independent survey that collects data on reef fish in southeast US waters using multiple gears, including chevron traps, video cameras, ROVs,...

  6. The 3D Human Motion Control Through Refined Video Gesture Annotation

    Science.gov (United States)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of releasing players from intractable game controllers. Moreover, for communication between humans and computers, video-based HCI is crucial since it is intuitive, easy to adopt, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the level of accuracy is highly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and analyzing motions for specific performances (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a human sub-body part and a row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the 3D motion-capture data matrix contains not pixel values but data much closer to the human level of semantics.

  7. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation, for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition, which controls video acquisition based on the movement of the mobile device over a branch and measures image quality, using clarity indexes to select the most appropriate frames for future processing. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of the mobile phone's movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.

  8. Randomized Controlled Trial of Video Self-Modeling Following Speech Restructuring Treatment for Stuttering

    Science.gov (United States)

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-01-01

    Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…

  9. 78 FR 57414 - Certain Video Game Systems and Wireless Controllers and Components Thereof, Commission...

    Science.gov (United States)

    2013-09-18

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof, Commission Determination Finding No Violation of the Tariff Act of 1930 AGENCY: U.S. International Trade Commission....

  10. 75 FR 68379 - In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation

    Science.gov (United States)

    2010-11-05

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation AGENCY: U.S. International Trade Commission. ACTION: Institution of investigation pursuant to 19 U.S.C. 1337. SUMMARY:...

  11. Video Demo: Deep Reinforcement Learning for Coordination in Traffic Light Control

    NARCIS (Netherlands)

    van der Pol, E.; Oliehoek, F.A.; Bosse, T.; Bredeweg, B.

    2016-01-01

    This video demonstration contrasts two approaches to coordination in traffic light control using reinforcement learning: earlier work, based on a deconstruction of the state space into a linear combination of vehicle states, and our own approach based on the Deep Q-learning algorithm.

  12. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    Science.gov (United States)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circle above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  13. Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect

    Directory of Open Access Journals (Sweden)

    Mohammed Ghanbari

    2008-06-01

    Full Text Available Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC of automatic repeat request (ARQ as a way of reconciling these factors, with a 40% saving in power in the worst channel conditions from economizing on transmissions when channel errors occur. Whatever the channel conditions are, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
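    A minimal sketch of the kind of fuzzy inference involved, mapping the observed channel loss and decoder buffer state to an ARQ retransmission limit. The membership functions and rule outputs below are invented for illustration and are not the paper's rule base.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function: 0 outside (a, c), peak 1 at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def retransmit_limit(packet_loss, buffer_fullness):
        """Toy Mamdani-style controller: inputs in [0, 1], output is an ARQ
        retransmission limit. Retransmissions are economized when the channel
        is lossy and skipped when the display deadline is at risk."""
        loss_low = tri(packet_loss, -0.1, 0.0, 0.2)
        loss_high = tri(packet_loss, 0.1, 0.5, 1.1)
        buf_low = tri(buffer_fullness, -0.1, 0.0, 0.5)
        buf_high = tri(buffer_fullness, 0.3, 1.0, 1.1)
        # Each rule fires with strength min(antecedents), suggesting a limit;
        # the crisp output is the firing-strength-weighted average.
        rules = [
            (min(loss_low, buf_high), 5.0),   # clean channel, healthy buffer
            (min(loss_high, buf_high), 2.0),  # lossy channel: economize
            (buf_low, 0.0),                   # deadline risk: fresh data only
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules)
        return num / den if den > 0 else 1.0
    ```

    The weighted-average defuzzification gives a smooth transition between retransmission policies rather than the hard switch of a threshold-based ARQ scheme.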

  14. Performance Analysis of Video PHY Controller Using Unidirection and Bi-directional IO Standard via 7 Series FPGA

    DEFF Research Database (Denmark)

    Das, Bhagwan; Abdullah, M F L; Hussain, Dil muhammed Akbar

    2017-01-01

    graphics consumes more power, which creates a need to design a low-power Video PHY controller. In this paper, the performance of the Video PHY controller is analyzed by comparing the power consumption of unidirectional and bi-directional IO Standards on a 7 series FPGA. It is determined...

  15. Teaching residents pediatric fiberoptic intubation of the trachea: traditional fiberscope with an eyepiece versus a video-assisted technique using a fiberscope with an integrated camera.

    Science.gov (United States)

    Wheeler, Melissa; Roth, Andrew G; Dsida, Richard M; Rae, Bronwyn; Seshadri, Roopa; Sullivan, Christine L; Heffner, Corri L; Coté, Charles J

    2004-10-01

    The authors' hypothesis was that a video-assisted technique should speed resident skill acquisition for flexible fiberoptic oral tracheal intubation (FI) of pediatric patients because the attending anesthesiologist can provide targeted instruction when sharing the view of the airway as the resident attempts intubation. Twenty Clinical Anesthesia year 2 residents, novices in pediatric FI, were randomly assigned to either the traditional group (traditional eyepiece FI) or the video group (video-assisted FI). One of two attending anesthesiologists supervised each resident during FI of 15 healthy children, aged 1-6 yr. The time from mask removal to confirmation of endotracheal tube placement by end-tidal carbon dioxide detection was recorded. Intubation attempts were limited to 3 min; up to three attempts were allowed. The primary outcome measure, time to success or failure, was compared between groups. Failure rate and number of attempts were also compared between groups. Three hundred patient intubations were attempted; eight failed. On average, the residents in the video group were faster, were three times more likely to successfully intubate at any given time during an attempt, and required fewer attempts per patient compared to those in the traditional group. The video system seems to be superior for teaching residents fiberoptic intubation in children.

  16. Technology consumption and cognitive control: Contrasting action video game experience with media multitasking.

    Science.gov (United States)

    Cardoso-Leite, Pedro; Kludt, Rachel; Vignola, Gianluca; Ma, Wei Ji; Green, C Shawn; Bavelier, Daphne

    2016-01-01

    Technology has the potential to impact cognition in many ways. Here we contrast two forms of technology usage: (1) media multitasking (i.e., the simultaneous consumption of multiple streams of media, such as texting while watching TV) and (2) playing action video games (a particular subtype of video games). Previous work has outlined an association between high levels of media multitasking and specific deficits in handling distracting information, whereas playing action video games has been associated with enhanced attentional control. Because these two factors are linked with reasonably opposing effects, failing to take them jointly into account may result in inappropriate conclusions as to the impacts of technology use on attention. Across four tasks (AX-continuous performance, N-back, task-switching, and filter tasks), testing different aspects of attention and cognition, we showed that heavy media multitaskers perform worse than light media multitaskers. Contrary to previous reports, though, the performance deficit was not specifically tied to distractors, but was instead more global in nature. Interestingly, participants with intermediate levels of media multitasking sometimes performed better than both light and heavy media multitaskers, suggesting that the effects of increasing media multitasking are not monotonic. Action video game players, as expected, outperformed non-video-game players on all tasks. However, surprisingly, this was true only for participants with intermediate levels of media multitasking, suggesting that playing action video games does not protect against the deleterious effect of heavy media multitasking. Taken together, these findings show that media consumption can have complex and counterintuitive effects on attentional control.

  17. Technology consumption and cognitive control: Contrasting action video game experience with media multitasking

    Science.gov (United States)

    Cardoso-Leite, Pedro; Kludt, Rachel; Vignola, Gianluca; Ma, Wei Ji; Green, C. Shawn; Bavelier, Daphne

    2015-01-01

    Technology has the potential to impact cognition in many ways. Here we contrast two forms of technology usage: 1) media multitasking (i.e., the simultaneous consumption of multiple streams of media, such as texting while watching TV) and 2) playing action video games (a particular sub-type of video game). Previous work has outlined an association between high levels of media multitasking and specific deficits in handling distracting information, while playing action video games has been associated with enhanced attentional control. As these two factors are linked with reasonably opposing effects, failing to take them jointly into account may result in inappropriate conclusions as to the impact of technology use on attention. Across four experiments (AX-CPT, N-back, Task-switching and Filter task), testing different aspects of attention and cognition, we show that heavy media multitaskers perform worse than light media multitaskers. Contrary to previous reports, though, the performance deficit was not specifically tied to distractors, but was instead more global in nature. Interestingly, participants with intermediate levels of media multitasking occasionally performed better than both light and heavy media multitaskers, suggesting that the effects of increasing media multitasking are not monotonic. Action video game players, as expected, outperformed non-video game players on all tasks. However, surprisingly, this was true only for participants with intermediate levels of media multitasking, suggesting that playing action video games does not protect against the deleterious effect of heavy media multitasking. Taken together, this study shows that media consumption can have complex and counter-intuitive effects on attentional control. PMID:26474982

  18. Discrete LQ Rate Control for MPEG2 Video Streaming System

    National Research Council Canada - National Science Library

    Xiaofei Zhou; Kenneth Ong

    2008-01-01

    ...) to transmissions of MPEG2 streams in IP networks. We give the detailed derivation of the disturbed-DLQ-regulator problem and the corresponding methods for implementing the results in media streaming rate control...

  19. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  20. An evaluation of fish behavior upstream of the water temperature control tower at Cougar Dam, Oregon, using acoustic cameras, 2013

    Science.gov (United States)

    Adams, Noah S.; Smith, Collin; Plumb, John M.; Hansen, Gabriel S.; Beeman, John W.

    2015-07-06

    This report describes the initial year of a 2-year study to determine the feasibility of using acoustic cameras to monitor fish movements to help inform decisions about fish passage at Cougar Dam near Springfield, Oregon. Specifically, we used acoustic cameras to measure fish presence, travel speed, and direction adjacent to the water temperature control tower in the forebay of Cougar Dam during the spring (May, June, and July) and fall (September, October, and November) of 2013. Cougar Dam is a high-head flood-control dam, and the water temperature control tower enables depth-specific water withdrawals to facilitate adjustment of water temperatures released downstream of the dam. The acoustic cameras were positioned at the upstream entrance of the tower to monitor free-ranging subyearling and yearling-size juvenile Chinook salmon (Oncorhynchus tshawytscha). Because of the large size discrepancy, we could distinguish juvenile Chinook salmon from their predators, which enabled us to measure predators and prey in areas adjacent to the entrance of the tower. We used linear models to quantify and assess operational and environmental factors—such as time of day, discharge, and water temperature—that may influence juvenile Chinook salmon movements within the beam of the acoustic cameras. Although extensive milling behavior of fish near the structure may have masked directed movement of fish and added unpredictability to fish movement models, the acoustic-camera technology enabled us to ascertain the general behavior of discrete size classes of fish. Fish travel speed, direction of travel, and counts of fish moving toward the water temperature control tower primarily were influenced by the amount of water being discharged through the dam.

  1. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning that is set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students in a 5th-semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist...

  2. The effects of self-controlled video feedback on the learning of the basketball set shot

    Directory of Open Access Journals (Sweden)

    Christopher Adam Aiken

    2012-09-01

    Full Text Available Allowing learners to control some aspect of instructional support (e.g., augmented feedback) appears to facilitate motor skill acquisition. No studies, however, have examined self-controlled (SC) video feedback without the provision of additional attentional cueing. The purpose of this study was to extend previous SC research using video feedback about movement form for the basketball set shot without explicitly directing attention to specific aspects of the movement. The SC group requested video feedback of their performance following any trial during the acquisition phase. The yoked (YK) group received feedback according to a schedule created by a SC counterpart. During acquisition participants were also allowed to view written instructional cues at any time. Results revealed that the SC group had significantly higher form scores during the transfer phase and utilized the instructional cues more frequently during acquisition. Post-training questionnaire responses indicated no preference for requesting or receiving feedback following good trials as reported by Chiviacowsky and Wulf (2002, 2005). The nature of the task was such that participants could have assigned both positive and negative evaluations to different aspects of the movement during the same trial. Thus, the lack of preferences along with the similarity in scores for feedback and no-feedback trials may simply have reflected this complexity. Importantly, however, the results indicated that SC video feedback conferred a learning benefit without the provision of explicit additional attentional cueing.

  3. Action video games and improved attentional control: Disentangling selection- and response-based processes.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-10-01

    Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus-response processes that impact human performance.

  4. A Fuzzy Control System for Inductive Video Games

    OpenAIRE

    Lara-Alvarez, Carlos; Mitre-Hernandez, Hugo; Flores, Juan; Fuentes, Maria

    2017-01-01

    It has been shown that the emotional state of students has an important relationship with learning; for instance, engaged concentration is positively correlated with learning. This paper proposes the Inductive Control (IC) for educational games. Unlike conventional approaches that only modify the game level, the proposed technique also induces emotions in the player for supporting the learning process. This paper explores a fuzzy system that analyzes the players' performance and their emotion...

  5. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
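
An illustrative sketch (made-up parameters, not the authors' electronics) of the noise-filtering idea above: the necklace LED flashes at 70 Hz with a 50% duty cycle, the photodiode array samples at 4 kHz, and correlating the zero-mean signal with a 70 Hz reference square wave rejects sunlight and indoor lighting, which do not share that modulation.

```python
import numpy as np

FS = 4000.0    # photodiode sample rate, Hz
F_LED = 70.0   # LED flash rate, Hz

def led_strength(samples, fs=FS, f_led=F_LED):
    """Lock-in style correlation with a +/-1 square wave at the LED frequency."""
    t = np.arange(len(samples)) / fs
    reference = np.sign(np.sin(2 * np.pi * f_led * t))
    x = samples - samples.mean()               # remove the DC ambient component
    return float(np.dot(x, reference)) / len(samples)

t = np.arange(int(FS)) / FS                                # one second of data
ambient = 5.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)          # slow room lighting
led = (np.sign(np.sin(2 * np.pi * F_LED * t)) > 0) * 1.0   # flashing LED, 0/1
with_led = led_strength(ambient + led)
without_led = led_strength(ambient)
```

The correlation is large only when the modulated LED is present, which is why a steady light source such as the sun produces almost no response.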

  6. Spatial video geonarratives and health: case studies in post-disaster recovery, crime, mosquito control and tuberculosis in the homeless.

    Science.gov (United States)

    Curtis, Andrew; Curtis, Jacqueline W; Shook, Eric; Smith, Steve; Jefferis, Eric; Porter, Lauren; Schuch, Laura; Felix, Chaz; Kerndt, Peter R

    2015-08-08

    A call has recently been made by the public health and medical communities to understand the neighborhood context of a patient's life in order to improve education and treatment. To do this, methods are required that can collect "contextual" characteristics while complementing the spatial analysis of more traditional data. This also needs to happen within a standardized, transferable, easy-to-implement framework. The Spatial Video Geonarrative (SVG) is an environmentally-cued narrative where place is used to stimulate discussion about fine-scale geographic characteristics of an area and the context of their occurrence. It is a simple yet powerful approach to enable collection and spatial analysis of expert and resident health-related perceptions and experiences of places. Participants comment about where they live or work while guiding a driver through the area. Four GPS-enabled cameras are attached to the vehicle to capture the places that are observed and discussed by the participant. Audio recording of this narrative is linked to the video via time stamp. A program (G-Code) is then used to geotag each word as a point in a geographic information system (GIS). Querying and density analysis can then be performed on the narrative text to identify spatial patterns within one narrative or across multiple narratives. This approach is illustrated using case studies on post-disaster psychopathology, crime, mosquito control, and TB in homeless populations. SVG can be used to map individual, group, or contested group context for an environment. The method can also gather data for cohorts where traditional spatial data are absent. In addition, SVG provides a means to spatially capture, map and archive institutional knowledge. SVG GIS output can be used to advance theory by being used as input into qualitative and/or spatial analyses. SVG can also be used to gain near-real time insight, therefore supporting applied interventions. Advances over existing geonarrative approaches...
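
An illustrative sketch (not the authors' G-Code program) of the geotagging step described above: each time-stamped word of the narrative is matched to the GPS fix closest to the moment it was spoken, yielding points that a GIS can query for word-density analysis. The coordinates and times below are made up.

```python
import bisect

gps_track = [                 # (seconds into the drive, latitude, longitude)
    (0.0, 41.100, -81.500),
    (10.0, 41.101, -81.502),
    (20.0, 41.103, -81.505),
]

words = [("abandoned", 4.0), ("house", 4.4), ("mosquitoes", 18.0)]

def geotag(word_time, track):
    """Return the (lat, lon) of the GPS fix nearest in time to the word."""
    times = [fix[0] for fix in track]
    i = bisect.bisect_left(times, word_time)
    candidates = track[max(0, i - 1):i + 1]   # the fixes bracketing word_time
    _, lat, lon = min(candidates, key=lambda fix: abs(fix[0] - word_time))
    return lat, lon

tagged = [(word, *geotag(ts, gps_track)) for word, ts in words]
```

Once each word carries a coordinate, standard GIS density tools can surface the spatial patterns the abstract describes.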

  7. 77 FR 58577 - Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for...

    Science.gov (United States)

    2012-09-21

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for... limited exclusion order and a cease and desist order against certain video game systems and...

  8. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    Directory of Open Access Journals (Sweden)

    Tianli Chu

    2003-01-01

    Full Text Available This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM by McCanne based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
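
The layering property described above, where a lost packet only invalidates the packets after it in the same layer, can be contrasted with one long embedded stream, where a single loss invalidates everything that follows. The toy calculation below compares the expected number of decodable packets in the two arrangements; the packet counts and loss rate are illustrative, not the paper's configuration.

```python
def expected_decodable(n_packets, loss_rate):
    """Expected decodable packets when packet i is useful only if packets
    1..i all arrive: sum over i of (1 - loss_rate)**i."""
    q = 1.0 - loss_rate
    return q * (1.0 - q ** n_packets) / (1.0 - q)

single_stream = expected_decodable(8, 0.2)    # one embedded stream of 8 packets
four_layers = 4 * expected_decodable(2, 0.2)  # four independent layers of 2
```

Independent layers confine the damage of each loss, which is the structural reason the PWV coder integrates well with per-layer error control.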

  9. Design, Implementation and Evaluation of Congestion Control Mechanism for Video Streaming

    Directory of Open Access Journals (Sweden)

    Hiroshi Noborio

    2011-05-01

    Full Text Available In recent years, video streaming services over TCP, such as YouTube, have become more and more popular. TCP NewReno, the current TCP standard, performs greedy congestion control, which increases the congestion window size until packet loss occurs. Therefore, because TCP transmits data at a much higher rate than the video playback rate, the probability of packet loss in the network increases, which in turn takes bandwidth from other network traffic. In this paper, we propose a new transport-layer protocol, called TCP Stream, that solves the problem of TCP in video streaming. TCP Stream performs a hybrid congestion control that combines loss-based congestion control, which uses packet loss as an index of congestion, and delay-based congestion control, which uses delay as an index of congestion. Simulation and experimental results show that TCP Stream transmits data at the adjusted rate, unlike TCP NewReno, and does not steal bandwidth from other network traffic.
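
A minimal, hypothetical sketch of the hybrid congestion control described above: grow the window while measured delay stays low, back off on packet loss or on rising delay, and cap the send rate near the video playback rate. The thresholds and gains are invented for illustration, not the TCP Stream specification.

```python
def update_cwnd(cwnd, loss, rtt, base_rtt, playback_cap,
                delay_thresh=1.5, beta=0.5):
    """Return the next congestion window (in segments)."""
    if loss or rtt > delay_thresh * base_rtt:      # loss- or delay-based signal
        return max(1.0, cwnd * beta)               # multiplicative decrease
    return min(cwnd + 1.0, playback_cap)           # additive increase, capped

cwnd = 10.0
cwnd = update_cwnd(cwnd, loss=False, rtt=0.050, base_rtt=0.050, playback_cap=20)
grown = cwnd                                       # low delay: grow by a segment
cwnd = update_cwnd(cwnd, loss=False, rtt=0.090, base_rtt=0.050, playback_cap=20)
backed_off = cwnd                                  # delay rose: window halves
```

The delay signal lets the sender yield before loss occurs, and the playback cap keeps it from transmitting far above the rate the video actually needs.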

  10. Positioning of Screw Holes Group Based on Digital Camera and Digital Control Drilling

    Institute of Scientific and Technical Information of China (English)

    FENG Wenhao; LI Jiansong; YAN Li; SU Guozhong; YUAN Xiuxiao; ZHONG Shengzhang; JI Huiming

    2004-01-01

    Positioning of screw holes is an important production procedure for steel constructions connected with bolts. In this paper, a new production method is presented in which a digital camera is used to take pictures of the screw holes, and other supporting techniques are developed. The paper also indicates that in photogrammetry the pixel of the CCD chip should be adopted as the geometric unit for all quantities in an image, such as the interior orientation elements and all kinds of distortions. This measure also simplifies camera calibration by avoiding the need to determine the size of non-square pixels.

  11. Cognitive rehabilitation of attention deficits in traumatic brain injury using action video games: A controlled trial

    Directory of Open Access Journals (Sweden)

    Alexandra Vakili

    2016-12-01

    Full Text Available This paper investigates the utility and efficacy of a novel eight-week cognitive rehabilitation programme developed to remediate attention deficits in adults who have sustained a traumatic brain injury (TBI), incorporating the use of both action video game playing and a compensatory skills programme. Thirty-one male TBI patients, aged 18–65 years, were recruited from 2 Australian brain injury units and allocated to either a treatment or waitlist (treatment-as-usual) control group. Results showed improvements in the treatment group, but not the waitlist control group, for performance on the immediate trained task (i.e. the video game and in non-trained measures of attention and quality of life. Neither group showed changes to executive behaviours or self-efficacy. The strengths and limitations of the study are discussed, as are the potential applications and future implications of the research.

  12. High dynamic range (HDR) virtual bronchoscopy rendering for video tracking

    Science.gov (United States)

    Popa, Teo; Choi, Jae

    2007-03-01

    In this paper, we present the design and implementation of a new rendering method based on high dynamic range (HDR) lighting and exposure control. This rendering method is applied to create video images for a 3D virtual bronchoscopy system. One of the main optical parameters of a bronchoscope's camera is the sensor exposure. The exposure adjustment is needed since the dynamic range of most digital video cameras is narrower than the high dynamic range of real scenes. The dynamic range of a camera is defined as the ratio of the brightest point of an image to the darkest point of the same image where details are present. In a video camera, exposure is controlled by shutter speed and the lens aperture. To create the virtual bronchoscopic images, we first rendered a raw image in absolute units (luminance); then, we simulated exposure by mapping the computed values to the values appropriate for video-acquired images using a tone mapping operator. We generated several images with HDR and others with low dynamic range (LDR), and then compared their quality by applying them to a 2D/3D video-based tracking system. We conclude that images with HDR are closer to real bronchoscopy images than those with LDR, and thus, that HDR lighting can improve the accuracy of image-based tracking.
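
The exposure simulation described above maps computed absolute luminance to display values with a tone mapping operator. As a stand-in, the sketch below uses the well-known Reinhard global operator L_d = L / (1 + L) with a simple exposure multiplier; the paper's actual operator and exposure model may differ.

```python
def tone_map(luminance, exposure=1.0):
    """Compress an absolute luminance value into [0, 1) for display."""
    scaled = exposure * luminance
    return scaled / (1.0 + scaled)

dark = tone_map(0.05)    # deep airway wall, little light
mid = tone_map(1.0)      # mid-range luminance maps to 0.5
bright = tone_map(50.0)  # near the light source, compressed below 1.0
```

The operator never clips: arbitrarily bright scene values are compressed toward 1.0 while detail in dark regions is preserved, which is what lets an HDR rendering mimic a real camera's limited range.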

  13. Research on the application of camera techniques in news video

    Institute of Scientific and Technical Information of China (English)

    王庆欣

    2016-01-01

    With social progress and the changing times, the news media have developed well. Governments use the media to communicate laws and administrative measures, while the public uses the media to voice its opinions and demands; the news media thus play a positive role both in social development and in daily life. By its nature, news reporting must convey many kinds of information, which requires certain camera techniques. This paper first analyzes the meaning and importance of news videography, and then discusses the main camera techniques applied in daily news work, in order to provide a theoretical reference for subsequent research on camera techniques and to contribute to the future development of news videography.

  14. Mobile-Based Video Learning Outcomes in Clinical Nursing Skill Education: A Randomized Controlled Trial.

    Science.gov (United States)

    Lee, Nam-Ju; Chae, Sun-Mi; Kim, Haejin; Lee, Ji-Hye; Min, Hyojin Jennifer; Park, Da-Eun

    2016-01-01

    Mobile devices are a regular part of daily life among the younger generations. Thus, now is the time to apply mobile device use to nursing education. The purpose of this study was to identify the effects of a mobile-based video clip on learning motivation, competence, and class satisfaction in nursing students using a randomized controlled trial with a pretest and posttest design. A total of 71 nursing students participated in this study: 36 in the intervention group and 35 in the control group. A video clip of how to perform a urinary catheterization was developed, and the intervention group was able to download it to their own mobile devices for unlimited viewing throughout 1 week. All of the students participated in a practice laboratory to learn urinary catheterization and were blindly tested for their performance skills after participation in the laboratory. The intervention group showed significantly higher levels of learning motivation and class satisfaction than did the control. Of the fundamental nursing competencies, the intervention group was more confident in practicing catheterization than their counterparts. Our findings suggest that video clips using mobile devices are useful tools that educate student nurses on relevant clinical skills and improve learning outcomes.

  15. Optimal source rate control for adapting VBR video over CBR channels

    Institute of Scientific and Technical Information of China (English)

    Chunwen LI; Peng ZHU

    2006-01-01

    In this paper we discuss the source rate control problem of adapting variable bit-rate (VBR) compressed video over constant bit-rate (CBR) channels. First, we formulate it as an optimal control problem of a discrete linear system with state and control constraints. Then we apply the discrete maximum principle to get the optimal solution. Experimental results are given at the end. Compared with traditional algorithms, the proposed algorithm is suitable for coders with continuous output rates, and can achieve a better solution. Our algorithm can be used in both off-line and on-line coding.
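
A toy sketch of the buffer model behind this kind of rate control: the encoder buffer evolves as b[k+1] = b[k] + r[k] - c for source rate r[k] and constant channel rate c. Below, a scalar discrete LQ feedback gain is found by iterating the Riccati recursion (with A = B = 1) and used to drive the buffer to a target level. This illustrates the modeling idea only; the paper solves the constrained problem with the discrete maximum principle.

```python
def lqr_gain(q, r, iters=200):
    """Scalar discrete-time Riccati iteration for A = B = 1; returns gain K."""
    p = q
    for _ in range(iters):
        p = q + p - p * p / (r + p)   # P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA
    return p / (r + p)                # K = (R + B'PB)^-1 B'PA

def simulate_buffer(b0, target, c, q=1.0, r_weight=0.1, steps=30):
    """Regulate the buffer level toward `target` with rate = c - K (b - target)."""
    k_gain = lqr_gain(q, r_weight)
    b = b0
    for _ in range(steps):
        rate = c - k_gain * (b - target)   # source rate chosen by the controller
        b = b + rate - c                   # buffer update b[k+1] = b[k] + r - c
    return b

final = simulate_buffer(b0=8.0, target=2.0, c=1.0)
```

Note this unconstrained sketch can request negative rates early in the transient; handling such state and control constraints is precisely why the paper resorts to the maximum principle instead of plain LQ feedback.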

  16. A brief report on the relationship between self-control, video game addiction and academic achievement in normal and ADHD students

    OpenAIRE

    Haghbin, Maryam; Shaterian, Fatemeh; Hosseinzadeh, Davood; Griffiths, Mark D.

    2013-01-01

    Background and aims: Over the last two decades, research into video game addiction has grown considerably. The present research aimed to examine the relationship between video game addiction, self-control, and academic achievement of normal and ADHD high school students. Based on previous research it was hypothesized that (i) there would be a relationship between video game addiction, self-control and academic achievement, and (ii) video game addiction, self-control and academic achievement would ...

  17. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors

    Directory of Open Access Journals (Sweden)

    Abdelkader Nasreddine Belkacem

    2015-01-01

    Full Text Available EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements for an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instance of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands. And a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied for controlling direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.
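
The reported ~30 bits/min can be related to the reported accuracy and number of commands through the standard Wolpaw information-transfer-rate formula. The selection rate (selections per minute) below is an assumed value for illustration; the abstract does not give the paper's exact timing.

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Bits conveyed per selection among n equiprobable commands."""
    p = accuracy
    bits = math.log2(n_classes) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1))
    return bits

bits = wolpaw_bits_per_selection(6, 0.773)   # six eye-movement commands, 77.3%
itr = bits * 23.0                            # assuming 23 selections per minute
```

At 77.3% accuracy over six commands, each selection carries roughly 1.3 bits, so a bit rate near 30 bits/min implies on the order of twenty-odd selections per minute.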

  18. Design of Hardware for CCD Video Surveillance Camera with External Synchronization Function

    Institute of Scientific and Technical Information of China (English)

    国蓉; 焦旸; 高明; 杜玉军

    2014-01-01

    To resolve discrepancies in identifying the same target across different cameras during tracking, an external synchronization function is developed on the basis of a common camera. A driving scheme using the dedicated DSP microprocessor CXD3172AR with an EEPROM is adopted, and external synchronization is achieved with a synchronizing signal generator and a phase-locked loop circuit. A signal processing circuit is also designed to achieve clear image output. Experimental results show that the video surveillance camera outputs a stable video signal, achieves external synchronization, and reaches an image resolution of 752 × 528 with an output frame rate of 50 frames/s.

  19. Video sensor with range measurement capability

    Science.gov (United States)

    Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Howard, Richard T. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
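The rangefinding principle in this record is plain triangulation. A minimal sketch, assuming the laser is mounted parallel to the optical axis and a pinhole camera model with focal length in pixels (the numbers are illustrative):

```python
import math

def range_from_spot(baseline_m, focal_px, spot_offset_px):
    """Triangulated range for a laser mounted parallel to the optical axis
    at a known baseline: tan(theta) = offset / f, so z = baseline / tan(theta)
                                                       = baseline * f / offset."""
    if spot_offset_px == 0:
        return math.inf                  # spot at the epipole: target at infinity
    return baseline_m * focal_px / spot_offset_px

# Example: 10 cm baseline, 800 px focal length, spot 20 px from the principal point.
z = range_from_spot(0.10, 800.0, 20.0)   # -> 4.0 metres
```

With multiple spots from the diffractive optic, the same formula applied per spot yields a small set of range samples across the target.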

  20. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    Science.gov (United States)

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    To engage medical students and residents from public health centers in using the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed to stream live footage from open surgeries to smartphones and tablets, allowing visualization of the surgical field from the surgeon's perspective. The current study describes the results of an evaluation at level 1 of Kirkpatrick's Model for Evaluation of the streaming system's use during gynecological surgeries, based on the perceptions of medical students and gynecology residents. The setup consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers connected to the local wireless network created by the streaming system through an access password and watched the video transmission in a web browser on their smartphones. They then answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, comparing it to watching a procedure in loco. This study was formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, totaling 294 answered items, of which 94.2% agreed with the affirmative statements, 4.1% were neutral, and only 1.7% corresponded to negative impressions. Cronbach's α was .82, which represents a good level of reliability. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20. This study presents a local system for streaming live surgery video to smartphones and tablets and shows its educational utility, low cost, and simple usage; it offers convenience and satisfactory image resolution and is thus potentially applicable in surgical teaching.
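Cronbach's α, reported above as .82, is straightforward to compute from the raw questionnaire matrix. A minimal sketch with synthetic Likert data (the data are invented; perfectly consistent answers give α = 1):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (respondents x items) matrix of Likert responses.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()     # sum of per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of each respondent's total
    return k / (k - 1) * (1.0 - item_var / total_var)

# Four respondents giving identical answers across all 14 items -> alpha = 1.
identical = np.tile(np.array([[5], [4], [3], [2]]), (1, 14))
alpha = cronbach_alpha(identical)
```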

  1. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many ca

  2. Visual Acuity and Contrast Sensitivity with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2009-01-01

    Video of Visual Acuity (VA) and Contrast Sensitivity (CS) test charts in a complex background was recorded using a CCD camera mounted on a computer-controlled tripod and fed into real-time MPEG2 compression/decompression equipment. The test charts were based on the Triangle Orientation

  3. Stereoscopic High Dynamic Range Video

    OpenAIRE

    Rüfenacht, Dominic

    2011-01-01

    Stereoscopic video content is usually created by using two or more cameras that record the same scene. Traditionally, those cameras have exactly the same intrinsic camera parameters. In this project, the exposure times of the cameras differ, allowing different parts of the dynamic range of the scene to be recorded. Image processing techniques are then used to enhance the dynamic range of the captured data. A pipeline for the recording, processing, and displaying of high dynamic range (...

  4. 76 FR 23624 - In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof...

    Science.gov (United States)

    2011-04-27

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Institution of Investigation AGENCY: U.S. International Trade Commission. ACTION:...

  5. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, in detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure, and deploying ad-hoc solutions based on the curren...

  6. Application of Digital Single Lens Reflex Camera with Digital Video Function in Clinical Medical Teaching

    Institute of Scientific and Technical Information of China (English)

    陈敏; 刘珺; 王玲

    2012-01-01

    Objective: To discuss the use of a digital single lens reflex (DSLR) camera with digital video (DV) function and its static and dynamic images in clinical medicine, teaching, and diagnosis. Methods: A high-quality DSLR camera with video capability, such as the Canon EOS 5D Mark II or EOS 550D, was chosen to record video and take medical photographs of clinical teaching cases and specimens, and guidance on basic operation was provided according to the characteristics of clinical medical photography. Results: Compared with a compact digital camera, a DSLR camera with DV function can record not only still medical images but also higher-quality video clips, which can be used for multimedia courseware, training materials, and networked image-and-text workstations. Conclusion: The DSLR camera with DV function improves the flexibility and variety with which clinicians and medical students record clinical case material, enriches the collection of medical imaging data, and is a portable, multifunctional, and indispensable recording tool in clinical teaching. [Chinese Medical Equipment Journal, 2012, 33(4): 94-95, 108]

  7. Quality control in radiosurgery: dosimetry with a micro-chamber in a spherical phantom

    Energy Technology Data Exchange (ETDEWEB)

    Casado Villalon, F. J.; Navarro Guirado, F.; Garci Pareja, S.; Benitez Villegas, E. M.; Galan Montenegro, P.; Moreno Saiz, C.

    2013-07-01

    Small-field dosimetry is part of the quality control of cranial radiosurgery treatments. In this work, the absorbed dose at the isocenter calculated by the treatment planning system is compared with values measured experimentally with a micro-chamber inserted in a spherical phantom. (Author)

  8. Optical signal processing of video surveillance for recognizing and measurement location railway infrastructure elements

    Science.gov (United States)

    Diyazitdinov, Rinat R.; Vasin, Nikolay N.

    2016-03-01

    Processing the optical signals received from the CCD sensors of video cameras makes it possible to extend the functionality of video surveillance systems. Traditional video surveillance systems are used for saving, transmitting, and preprocessing video content from the controlled objects. Processing the video signal with analytics systems yields additional information about an object's location and movement and about the flow of technological processes, and allows other parameters to be measured. For example, signal processing of video surveillance systems installed on laboratory carriages is used to obtain information about certain parameters of the railways. This article describes two video processing algorithms: one that recognizes pedestrian crossings of the railways, and one that measures the location of the so-called "anchor marks" used to control the mechanical stresses of continuous welded rail track. The algorithms are based on the principle of determining a region of interest (ROI) and then analyzing the fragments inside this ROI.

  9. Error control in the set-up of stereo camera systems for 3d animal tracking

    Science.gov (United States)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.
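The dependence of 3d accuracy on the camera set-up can be seen in the standard rectified-stereo relations: depth z = f·b/d and, to first order, an error |Δz| = z²/(f·b)·|Δd| that grows quadratically with distance. A minimal sketch (illustrative numbers, not the authors' set-up):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: z = f * b / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, disparity_px, disparity_err_px):
    """First-order propagated depth error: |dz| = z^2 / (f*b) * |dd|."""
    z = stereo_depth(focal_px, baseline_m, disparity_px)
    return z * z / (focal_px * baseline_m) * disparity_err_px

# 1000 px focal length, 0.5 m baseline, 25 px disparity, 0.1 px matching error.
z = stereo_depth(1000.0, 0.5, 25.0)          # 20 m to the animal
dz = depth_error(1000.0, 0.5, 25.0, 0.1)     # 8 cm reconstruction uncertainty
```

The quadratic growth of dz with z is why the baseline and calibration precision dominate the reliability of the retrieved trajectories.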

  10. Sending Safety Video over WiMAX in Vehicle Communications

    Directory of Open Access Journals (Sweden)

    Jun Steed Huang

    2013-10-01

    Full Text Available This paper reports on the design of an OPNET simulation platform to test the performance of sending real-time safety video over VANET (Vehicular Adhoc NETwork using the WiMAX technology. To provide a more realistic environment for streaming real-time video, a video model was created based on the study of video traffic traces captured from a realistic vehicular camera, and different design considerations were taken into account. A practical controller over real-time streaming protocol is implemented to control data traffic congestion for future road safety development. Our driving video model was then integrated with the WiMAX OPNET model along with a mobility model based on real road maps. Using this simulation platform, different mobility cases have been studied and the performance evaluated in terms of end-to-end delay, jitter and visual experience.

  11. Efficient Video Transcoding from H.263 to H.264/AVC Standard with Enhanced Rate Control

    Directory of Open Access Journals (Sweden)

    Nguyen Viet-Anh

    2006-01-01

    Full Text Available A new video coding standard H.264/AVC has been recently developed and standardized. The standard represents a number of advances in video coding technology in terms of both coding efficiency and flexibility and is expected to replace the existing standards such as H.263 and MPEG-1/2/4 in many possible applications. In this paper we investigate and present efficient syntax transcoding and downsizing transcoding methods from H.263 to H.264/AVC standard. Specifically, we propose an efficient motion vector reestimation scheme using vector median filtering and a fast intraprediction mode selection scheme based on coarse edge information obtained from integer-transform coefficients. Furthermore, an enhanced rate control method based on a quadratic model is proposed for selecting quantization parameters at the sequence and frame levels together with a new frame-layer bit allocation scheme based on the side information in the precoded video. Extensive experiments have been conducted and the results show the efficiency and effectiveness of the proposed methods.
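The quadratic rate model used for quantization-parameter selection can be sketched as follows: with R(Q) = a/Q + b/Q², the target rate yields Q as the positive root of R·Q² − a·Q − b = 0. The model coefficients below are illustrative assumptions, not values from the paper:

```python
import math

def q_from_quadratic_model(a, b, target_bits):
    """Solve R = a/Q + b/Q^2 for the quantization step Q:
    R*Q^2 - a*Q - b = 0  ->  Q = (a + sqrt(a^2 + 4*R*b)) / (2*R)."""
    R = float(target_bits)
    return (a + math.sqrt(a * a + 4.0 * R * b)) / (2.0 * R)

def model_rate(a, b, Q):
    """Bits predicted by the quadratic R-Q model."""
    return a / Q + b / (Q * Q)

# Illustrative model coefficients and a 300-bit budget for the current unit.
Q = q_from_quadratic_model(a=2000.0, b=50000.0, target_bits=300.0)
```

Plugging Q back into the model reproduces the target budget, which is the consistency check a rate controller relies on each frame.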

  12. Fuzzy Logic Control of Adaptive ARQ for Video Distribution over a Bluetooth Wireless Link

    Directory of Open Access Journals (Sweden)

    R. Razavi

    2007-01-01

    Full Text Available Bluetooth's default automatic repeat request (ARQ scheme is not suited to video distribution resulting in missed display and decoded deadlines. Adaptive ARQ with active discard of expired packets from the send buffer is an alternative approach. However, even with the addition of cross-layer adaptation to picture-type packet importance, ARQ is not ideal in conditions of a deteriorating RF channel. The paper presents fuzzy logic control of ARQ, based on send buffer fullness and the head-of-line packet's deadline. The advantage of the fuzzy logic approach, which also scales its output according to picture type importance, is that the impact of delay can be directly introduced to the model, causing retransmissions to be reduced compared to all other schemes. The scheme considers both the delay constraints of the video stream and at the same time avoids send buffer overflow. Tests explore a variety of Bluetooth send buffer sizes and channel conditions. For adverse channel conditions and buffer size, the tests show an improvement of at least 4 dB in video quality compared to nonfuzzy schemes. The scheme can be applied to any codec with I-, P-, and (possibly B-slices by inspection of packet headers without the need for encoder intervention.
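The flavor of such a fuzzy ARQ decision can be conveyed with a toy two-input controller (triangular memberships, min-rule inference, weighted-average defuzzification). The rule table and value ranges are assumptions for illustration, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def retransmit_score(fullness, slack):
    """fullness in [0,1]: send-buffer occupancy; slack in [0,1]: normalized
    head-of-line deadline margin. Output in [0,1]: willingness to retransmit.
    Assumed rules: a full buffer or a tight deadline discourages retransmission."""
    low_full, high_full = tri(fullness, -1.0, 0.0, 1.0), tri(fullness, 0.0, 1.0, 2.0)
    low_slack, high_slack = tri(slack, -1.0, 0.0, 1.0), tri(slack, 0.0, 1.0, 2.0)
    rules = [
        (min(low_full, high_slack), 1.0),   # room in buffer, time left -> retransmit
        (min(low_full, low_slack), 0.4),
        (min(high_full, high_slack), 0.4),
        (min(high_full, low_slack), 0.0),   # full buffer, no time -> discard
    ]
    w = sum(m for m, _ in rules)
    return sum(m * out for m, out in rules) / w if w else 0.0

score = retransmit_score(fullness=0.2, slack=0.9)   # near-empty buffer, ample time
```

In the paper's scheme the output would additionally be scaled by picture-type importance (I-, P-, B-slices).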

  13. The Video Genome

    CERN Document Server

    Bronstein, Alexander M; Kimmel, Ron

    2010-01-01

    Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis such as identifying a video in a large database (e.g. detecting pirated content in YouTube), putting together video fragments, finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms allows to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.
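The bioinformatics analogy amounts to aligning symbol sequences. As a minimal sketch, Needleman-Wunsch global alignment over frames quantized to visual-word symbols (the symbolic encoding here is an assumption) scores how similar two "video DNA" strings are:

```python
def align_score(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two symbol sequences.
    Gaps model inserted/deleted fragments; mismatches model edited frames."""
    n, m = len(seq_a), len(seq_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # match/mismatch
                           dp[i - 1][j] + gap,       # deletion
                           dp[i][j - 1] + gap)       # insertion
    return dp[n][m]

# Two six-frame clips differing by one edited frame (one substitution).
score = align_score("ABCDEF", "ABXDEF")
```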

  14. Effects of a Video on Organ Donation Consent Among Primary Care Patients: A Randomized Controlled Trial.

    Science.gov (United States)

    Thornton, J Daryl; Sullivan, Catherine; Albert, Jeffrey M; Cedeño, Maria; Patrick, Bridget; Pencak, Julie; Wong, Kristine A; Allen, Margaret D; Kimble, Linda; Mekesa, Heather; Bowen, Gordon; Sehgal, Ashwini R

    2016-08-01

    Low organ donation rates remain a major barrier to organ transplantation. We aimed to determine the effect of a video and patient cueing on organ donation consent among patients meeting with their primary care provider. This was a randomized controlled trial conducted between February 2013 and May 2014 in the waiting rooms of 18 primary care clinics of a medical system in Cuyahoga County, Ohio. The study included 915 patients over 15.5 years of age who had not previously consented to organ donation. Just prior to their clinical encounter, intervention patients (n = 456) watched a 5-minute organ donation video on iPads and then chose a question regarding organ donation to ask their provider. Control patients (n = 459) visited their provider per the usual routine. The primary outcome was the proportion of patients who consented to organ donation. Secondary outcomes included the proportion of patients who discussed organ donation with their provider and the proportion who were satisfied with the time spent with their provider during the clinical encounter. Intervention patients were more likely than control patients to consent to donate organs (22% vs. 15%, OR 1.50, 95% CI 1.10-2.13). Intervention patients were also more likely to have donation discussions with their provider (77% vs. 18%, OR 15.1, 95% CI 11.1-20.6). Intervention and control patients were similarly satisfied with the time they spent with their provider (83% vs. 86%, OR 0.87, 95% CI 0.61-1.25). How the observed increases in organ donation consent might translate into a greater organ supply is unclear. Watching a brief video regarding organ donation and being cued to ask a primary care provider a question about donation resulted in more organ donation discussions and an increase in organ donation consent. Satisfaction with the time spent during the clinical encounter was not affected. clinicaltrials.gov identifier: NCT01697137.
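The odds ratios reported above follow from the standard 2×2-table formulas, OR = (a/c)/(b/d) with a 95% CI from the log-OR normal approximation. A sketch with hypothetical counts (not the trial's data):

```python
import math

def odds_ratio(a, n1, b, n2):
    """Odds ratio of an outcome occurring a times out of n1 vs. b out of n2."""
    return (a / (n1 - a)) / (b / (n2 - b))

def or_ci95(a, n1, b, n2):
    """95% CI via the normal approximation on the log odds ratio:
    SE = sqrt(1/a + 1/c + 1/b + 1/d)."""
    log_or = math.log(odds_ratio(a, n1, b, n2))
    se = math.sqrt(1 / a + 1 / (n1 - a) + 1 / b + 1 / (n2 - b))
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

# Hypothetical table: 20/100 consented in one arm, 10/100 in the other.
or_value = odds_ratio(20, 100, 10, 100)    # (20/80) / (10/90) = 2.25
lo, hi = or_ci95(20, 100, 10, 100)
```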

  15. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    Science.gov (United States)

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  18. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  19. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    Science.gov (United States)

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
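Keeping the gaze point at the monitor centre is essentially a proportional controller with a deadband on the pixel error. A minimal sketch (gain, deadband, and command units are illustrative assumptions, not the Aesop interface's actual parameters):

```python
def center_gaze_command(gaze_px, frame_size, gain=0.01, deadband_px=40):
    """Proportional pan/tilt command that drives the user's gaze point toward
    the image centre; the deadband suppresses jitter from eye-tracker noise."""
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    ex, ey = gaze_px[0] - cx, gaze_px[1] - cy      # pixel error from centre
    pan = gain * ex if abs(ex) > deadband_px else 0.0
    tilt = gain * ey if abs(ey) > deadband_px else 0.0
    return pan, tilt

# Gaze well to the right of centre on a 640x480 feedback monitor.
pan, tilt = center_gaze_command((500, 240), (640, 480))
```

Each video frame, the robot would apply the returned command, converging until the gaze region sits inside the deadband at the image centre.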

  20. HONEY -- The Honeywell Camera

    Science.gov (United States)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  1. Multimodal sensing-based camera applications

    Science.gov (United States)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, J. Olli; Vehviläinen, Markku

    2011-02-01

    The increased sensing and computing capabilities of mobile devices can provide for enhanced mobile user experience. Integrating the data from different sensors offers a way to improve application performance in camera-based applications. A key advantage of using cameras as an input modality is that it enables recognizing the context. Therefore, computer vision has been traditionally utilized in user interfaces to observe and automatically detect the user actions. The imaging applications can also make use of various sensors for improving the interactivity and the robustness of the system. In this context, two applications fusing the sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first solution is a real-time user interface that can be used for browsing large images. The solution enables the display to be controlled by the motion of the user's hand using the built-in sensors as complementary information. The second application is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, providing also instructions during the capture. The experiments show that fusing the sensor data improves camera-based applications especially when the conditions are not optimal for approaches using camera data alone.

  2. Development of the control circuits for the TDI-CCD stereo camera of the Chang'E-2 satellite based on FPGAs

    Science.gov (United States)

    Duan, Yong-Qiang; Gao, Wei; Qiao, Wei-Dong; Wen, De-Sheng; Zhao, Bao-Chang

    2013-09-01

    The TDI-CCD stereo camera is the optical sensor on the Chang'E-2 (CE-2) satellite, created for the Chinese Lunar Exploration Program. The camera was designed to acquire three-dimensional stereoscopic images of the lunar surface based on three-line-array photogrammetric theory. The primary objectives of the camera are (1) to obtain images of the prospective landing site at about 1-m pixel spatial resolution from an elliptical orbit at an altitude of ~15 km, and (2) to obtain global images of the Moon at about 7-m pixel spatial resolution from a circular orbit at an altitude of ~100 km. The focal plane of the camera comprises two TDI-CCDs. The control circuits of the camera are based on two SRAM-type FPGAs, XQR2V3000-4CG717. In this paper, a variable-frequency control and multi-tap data readout technology for the TDI-CCD is presented, which adapts the data processing capability to the orbit mode of the stereo camera. In this way, the data rate of the camera is greatly reduced, from 100 Mbps to 25 Mbps in the high-orbit mode, which benefits the reliability of image transfer. Onboard flight results validate that the proposed methodology is reasonable and reliable.

  3. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model overcomes the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames have more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results, and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.

  4. Online rate control in digital cameras for near-constant distortion based on minimum/maximum criterion

    Science.gov (United States)

    Lee, Sang-Yong; Ortega, Antonio

    2000-04-01

    We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored in the given memory size, and they require limited time delay and constant quality for each image. Owing to the time-delay restriction, each image must be stored before the next image is received. Therefore, we need an online rate control that is based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control in which an adaptive reference, a "buffer-like" constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images, and the "buffer-like" constraint is required to keep enough memory for future images. We show that using our algorithm to select the online bit allocation for each image in a randomly given set of images provides near-constant quality. We also show that our result is near optimal when a minimax criterion is used, i.e., it achieves performance close to that obtained by an off-line rate control with exact knowledge of the images. Suboptimal behavior is observed only in situations where the distribution of images is not truly random (e.g., if most of the "complex" images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, show that it removes the suboptimal behavior.
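The "buffer-like" constraint idea — spreading the remaining memory over the remaining images while holding back a reserve for future complex images — can be sketched in a few lines (the reserve fraction is an illustrative assumption, not the paper's algorithm):

```python
def next_image_budget(total_bytes, used_bytes, images_done, images_total,
                      reserve_frac=0.05):
    """Bytes allowed for the next image: split the remaining memory evenly
    over the remaining images, keeping a small reserve (a 'buffer-like'
    constraint) so that complex future images cannot overflow the card."""
    remaining_imgs = images_total - images_done
    usable = (total_bytes - used_bytes) * (1.0 - reserve_frac)
    return usable / remaining_imgs

# 64 MB card, 16 MB used after 25 of 100 planned shots.
budget = next_image_budget(64_000_000, 16_000_000, 25, 100)
```

An encoder would then pick the quantization setting whose predicted size fits this budget, re-estimating after every stored image.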

  5. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  6. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  7. Fast measurement of temporal noise of digital camera's photosensors

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

    Currently photo- and video cameras are widespread parts of both scientific experimental setups and consumer applications. They are used in optics, radiophysics, astrophotography, chemistry, and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. The spatial part is usually several times lower in magnitude than the temporal part, so to a first approximation spatial noise can be neglected. Earlier we proposed a modification of the automatic segmentation of non-uniform targets (ASNT) method for measuring the temporal noise of photo- and video cameras. Only two frames are sufficient for noise measurement with the modified method; as a result, the proposed ASNT modification should allow fast and accurate measurement of temporal noise. In this paper, we estimate the light and dark temporal noise of four cameras of different types using the modified ASNT method with only several frames. These cameras are: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PLB781F (CMOS, 6.6 MP, 10-bit ADC), and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). The experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. We also measured the time elapsed in processing the shots used for temporal noise estimation. The results demonstrate that the proposed ASNT modification can quickly obtain the dependency of a camera's full temporal noise on signal value.
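The two-frame principle behind such a measurement can be sketched as follows: the difference of two captures of a static scene has twice the temporal-noise variance, so the per-pixel noise is the standard deviation of the difference divided by sqrt(2). This sketch shows only that core identity with hypothetical names; the actual ASNT method additionally segments non-uniform targets to cover the full signal range.

```python
import math
import random
from statistics import pstdev

def temporal_noise_two_frames(frame_a, frame_b):
    """Temporal noise from two shots of a static scene:
    Var(a - b) = 2 * sigma^2, hence sigma = std(a - b) / sqrt(2)."""
    diff = [a - b for a, b in zip(frame_a, frame_b)]
    return pstdev(diff) / math.sqrt(2)

# Simulated sensor: flat 100-DN scene with sigma = 4 DN temporal noise.
random.seed(1)
shot_a = [100.0 + random.gauss(0, 4) for _ in range(50000)]
shot_b = [100.0 + random.gauss(0, 4) for _ in range(50000)]
est = temporal_noise_two_frames(shot_a, shot_b)
```

Because the scene cancels in the subtraction, the estimate does not require a uniform target, which is what makes two frames sufficient.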

  8. Tower Camera

    Data.gov (United States)

    Oak Ridge National Laboratory — The tower camera in Barrow provides hourly images of ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for...

  9. An Adaptive Video Coding Control Scheme for Real-Time MPEG Applications

    Directory of Open Access Journals (Sweden)

    Hsia Shih-Chang

    2003-01-01

    Full Text Available This paper proposes a new rate control scheme to increase the coding efficiency of MPEG systems. Instead of using a static group of pictures (GOP) structure, we present an adaptive GOP structure that uses more P- and B-frame coding while the temporal correlation among the video frames remains high. When there is a scene change, we immediately insert intra-mode coding to reduce the prediction error. Moreover, an enhanced prediction frame is used to improve the coding quality in the adaptive GOP. This rate control algorithm both achieves better coding efficiency and solves the scene-change problem. Even if the coding bit rate exceeds the predefined level, this coding scheme does not require re-encoding for real-time systems. Simulations demonstrate that our proposed algorithm can achieve better quality than TM5, and satisfactory reliability in detecting scene changes.
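A toy version of the adaptive-GOP idea can be sketched as follows. The thresholds, the B/P heuristic, and the function name are illustrative assumptions, not the paper's scheme: code an intra (I) frame on a scene change, otherwise prefer B frames and fall back to a P anchor once enough motion has accumulated.

```python
def choose_frame_types(activities, scene_change_thresh, accum_thresh):
    """Assign I/P/B coding types from per-frame motion activity."""
    types, accum = [], 0.0
    for i, act in enumerate(activities):
        if i == 0 or act > scene_change_thresh:
            types.append('I')      # scene change: restart the GOP with intra coding
            accum = 0.0
        else:
            accum += act
            if accum > accum_thresh:
                types.append('P')  # motion built up: insert a prediction anchor
                accum = 0.0
            else:
                types.append('B')  # correlation still high: cheap bidirectional frame
    return types

# A sudden activity spike at frame 3 forces an intra refresh.
gop = choose_frame_types([0.0, 1.0, 1.0, 9.0, 1.0], 5.0, 1.5)
```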

  10. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    Science.gov (United States)

    Brown, Michael A.

    2009-01-01

    With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) visually enhances real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are easily readied to substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across many operators and products. 
The DS process shall invest in collectively automating the sharing of images while focusing on characteristics such as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, host/recipient controllability, and, of paramount priority, an enterprise solution that provides ownership to the whole

  11. Cardiac cameras.

    Science.gov (United States)

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner, which slowly transmitted cardiac count distributions onto various printing media in a row-by-row fashion, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. The increased sophistication of cardiac cameras and the development of powerful computers to analyze, display, and quantify data have been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid-state cameras, fundamentally different from the Anger camera, show promise of higher counting efficiency and resolution, leading to better image quality, greater patient comfort, and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, the increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, i.e., hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, and to prevent and cure disease processes.

  12. Design considerations to improve cognitive ergonomic issues of unmanned vehicle interfaces utilizing video game controllers.

    Science.gov (United States)

    Oppold, P; Rupp, M; Mouloua, M; Hancock, P A; Martin, J

    2012-01-01

    Unmanned systems (UAVs, UCAVs, and UGVs) still have major human factors and ergonomic challenges related to the effective design of their control interface systems, which is crucial to their efficient operation, maintenance, and safety. Unmanned system interfaces designed with a human-centered approach promote intuitive interfaces that are easier to learn and reduce human errors and other cognitive ergonomic issues. Automation has shifted workload from physical to cognitive; thus control interfaces for unmanned systems need to reduce the mental workload on operators and facilitate the interaction between vehicle and operator. Two-handed video game controllers provide wide usability within the overall population, prior exposure for new operators, and a variety of interface complexity levels to match the complexity of the task and reduce cognitive load. This paper categorizes and provides a taxonomy for 121 haptic interfaces from the entertainment industry that can be utilized as control interfaces for unmanned systems. Five categories of controllers were defined based on the complexity of the buttons, control pads, joysticks, and switches on the controller. This allows the selection of the level of complexity needed for a specific task without creating an entirely new design or utilizing an overly complex one.

  13. Do Motion Controllers Make Action Video Games Less Sedentary? A Randomized Experiment

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Lyons

    2012-01-01

    Full Text Available Sports- and fitness-themed video games using motion controllers have been found to produce physical activity. It is possible that motion controllers may also enhance energy expenditure when applied to more sedentary games such as action games. Young adults (N = 100) were randomized to play three games using either motion-based or traditional controllers. No main effect was found for controller or game pair (P > .12). An interaction was found such that in one pair, motion control (mean [SD] 0.96 [0.20] kcal·kg⁻¹·hr⁻¹) produced 0.10 kcal·kg⁻¹·hr⁻¹ (95% confidence interval 0.03 to 0.17) greater energy expenditure than traditional control (0.86 [0.17] kcal·kg⁻¹·hr⁻¹, P = .048). All games were sedentary. As currently implemented, motion control is unlikely to produce moderate-intensity physical activity in action games. However, some games produce small but significant increases in energy expenditure, which may benefit health by decreasing sedentary behavior.

  14. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    Science.gov (United States)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled from two or three electronic subassemblies at very low labor cost, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent in a small, lightweight, and rugged image-sensing component. Available modules permit the bus-structured Formatter to be

  15. Algorithms for Deterministic Call Admission Control of Pre-stored VBR Video Streams

    Directory of Open Access Journals (Sweden)

    Christos Tryfonas

    2009-08-01

    Full Text Available We examine the problem of accepting a new request for a pre-stored VBR video stream that has been smoothed using any of the smoothing algorithms found in the literature. The output of these algorithms is a piecewise constant-rate schedule for a Variable Bit-Rate (VBR) stream. The schedule guarantees that the decoder buffer does not overflow or underflow. The problem addressed in this paper is the determination of the minimal time displacement of each newly requested VBR stream so that it can be accommodated by the network and/or the video server without overbooking the committed traffic. We prove that this call-admission control problem for multiple requested VBR streams is NP-complete and inapproximable within a constant factor, by reduction from the VERTEX COLORING problem. We also present a deterministic morphology-sensitive algorithm that calculates the minimal time displacement of a VBR stream request. The complexity of the proposed algorithm, along with the experimental results we provide, indicates that it is suitable for real-time determination of the time displacement parameter during the call admission phase.
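The quantity being computed can be made concrete with a discrete-time sketch: given the committed aggregate traffic per slot and the new stream's piecewise-constant schedule, find the smallest shift at which the sum never exceeds link capacity. This brute-force search only illustrates the definition (names are hypothetical); the paper's morphology-sensitive algorithm is far more efficient.

```python
def minimal_displacement(committed, new_schedule, capacity):
    """Smallest integer slot shift d such that the new stream's rate
    schedule fits on top of the committed traffic in every slot."""
    horizon = len(committed)
    for d in range(horizon - len(new_schedule) + 1):
        if all(committed[d + t] + rate <= capacity
               for t, rate in enumerate(new_schedule)):
            return d
    return None  # cannot be admitted within the horizon

# A link of capacity 10 is busy for two slots; a [3, 3] schedule first
# fits when delayed to slot 2.
delay = minimal_displacement([8, 8, 2, 2, 2], [3, 3], 10)
```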

  16. Call Admission Control Algorithm for pre-stored VBR video streams

    CERN Document Server

    Tryfonas, Christos; Mehler, Andrew; Skiena, Steven

    2008-01-01

    We examine the problem of accepting a new request for a pre-stored VBR video stream that has been smoothed using any of the smoothing algorithms found in the literature. The output of these algorithms is a piecewise constant-rate schedule for a Variable Bit-Rate (VBR) stream. The schedule guarantees that the decoder buffer does not overflow or underflow. The problem addressed in this paper is the determination of the minimal time displacement of each newly requested VBR stream so that it can be accommodated by the network and/or the video server without overbooking the committed traffic. We prove that this call-admission control problem for multiple requested VBR streams is NP-complete and inapproximable within a constant factor, by reduction from the VERTEX COLORING problem. We also present a deterministic morphology-sensitive algorithm that calculates the minimal time displacement of a VBR stream request. The complexity of the proposed algorithm make it suitable for real-time determination of the time displacem...

  17. Video-feedback intervention increases sensitive parenting in ethnic minority mothers: a randomized control trial.

    Science.gov (United States)

    Yagmur, Sengul; Mesman, Judi; Malda, Maike; Bakermans-Kranenburg, Marian J; Ekmekci, Hatice

    2014-01-01

    Using a randomized controlled trial design, we tested the effectiveness of a culturally sensitive adaptation of the Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a sample of 76 Turkish minority families in the Netherlands. The VIPP-SD was adapted based on a pilot with feedback from the target mothers, resulting in the VIPP-TM (VIPP-Turkish Minorities). The sample included families with 20- to 47-month-old children with high levels of externalizing problems. Maternal sensitivity, nonintrusiveness, and discipline strategies were observed during pretest and posttest home visits. The VIPP-TM was effective in increasing maternal sensitivity and nonintrusiveness, but not in enhancing discipline strategies. Applying newly learned sensitivity skills in discipline situations may take more time, especially in a cultural context that favors more authoritarian strategies. We conclude that the VIPP-SD program and its video-feedback approach can be successfully applied in immigrant families with a non-Western cultural background, with demonstrated effects on parenting sensitivity and nonintrusiveness.

  18. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  19. A Randomized Controlled Trial of a CPR Decision Support Video for Patients Admitted to the General Medicine Service.

    Science.gov (United States)

    Merino, Aimee M; Greiner, Ryan; Hartwig, Kristopher

    2017-09-01

    Patient preferences regarding cardiopulmonary resuscitation (CPR) are important, especially during hospitalization when a patient's health is changing. Yet many patients are not adequately informed or involved in the decision-making process. We examined the effect of an informational video about CPR on hospitalized patients' code status choices. This was a prospective, randomized trial conducted at the Minneapolis Veterans Affairs Health Care System in Minnesota. We enrolled 119 patients who were hospitalized on the general medicine service and at least 65 years old. The majority were men (97%), with a mean age of 75. A video described the code status choices: full code (CPR and intubation if required), do not resuscitate (DNR), and do not resuscitate/do not intubate (DNR/DNI). Participants were randomized to watch the video (n = 59) or usual care (n = 60). The primary outcome was participants' code status preferences. Secondary outcomes included a questionnaire designed to evaluate participants' trust in their healthcare team and their knowledge and perceptions about CPR. Participants who viewed the video were less likely to choose full code (37%) than participants in the usual care group (71%), and more likely to choose DNR/DNI (56% in the video group vs. 17% in the control group) (P < 0.00001). We did not see a difference in trust in the healthcare team or in knowledge and perceptions about CPR as assessed by our questionnaire. Hospitalized patients who watched a video about CPR and code status choices were less likely to choose full code and more likely to choose DNR/DNI.

  20. Reactionless camera inspection with a free-flying space robot under reaction null-space motion control

    Science.gov (United States)

    Sone, Hiroki; Nenchev, Dragomir

    2016-11-01

    The possibility of implementing reactionless motion control with respect to base orientation of a free-flying space robot in practical tasks is addressed. It is shown that this possibility depends strongly on the kinematic/dynamic design parameters as well as on the mission task. A successful implementation of a camera inspection task is reported. The presence of kinematic redundancy and the manipulator attachment position are shown to play important roles. More specifically, for a manipulator arm with a typical seven degree-of-freedom (DoF) kinematic structure, it is shown that two motion patterns, wrist reorientation and folding/unfolding of the arm, result in almost reactionless motion. The wrist reorientation pattern is adopted as the main task for camera inspection, while the remaining four DoFs are used to ensure completely reactionless motion and to minimize the position errors. Since the composition of these tasks introduces so-called algorithmic singularities, two methods are suggested to alleviate the problem. Furthermore, it is shown that other types of singularities may be introduced by an inappropriate choice of the manipulator attachment position. Finally, a numerical analysis shows that reactionless motion also provides an advantage in terms of kinetic energy.

  1. Digital Video Teach Yourself VISUALLY

    CERN Document Server

    Watson, Lonzell

    2010-01-01

    Tips and techniques for shooting and sharing superb digital videos. Never before has video been more popular-or more accessible to the home photographer. Now you can create YouTube-worthy, professional-looking video, with the help of this richly illustrated guide. In a straightforward, simple, highly visual format, Teach Yourself VISUALLY Digital Video demystifies the secrets of great video. With colorful screenshots and illustrations plus step-by-step instructions, the book explains the features of your camera and their capabilities, and shows you how to go beyond "auto" to manually

  2. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  3. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  4. Visual Fixation for 3D Video Stabilization

    Directory of Open Access Journals (Sweden)

    Hans-Peter Seidel

    2011-03-01

    Full Text Available Visual fixation is employed by humans and some animals to keep a specific 3D location at the center of the visual gaze. Inspired by this phenomenon in nature, this paper explores the idea to transfer this mechanism to the context of video stabilization for a hand-held video camera. A novel approach is presented that stabilizes a video by fixating on automatically extracted 3D target points. This approach is different from existing automatic solutions that stabilize the video by smoothing. To determine the 3D target points, the recorded scene is analyzed with a state-of-the-art structure-from-motion algorithm, which estimates camera motion and reconstructs a 3D point cloud of the static scene objects. Special algorithms are presented that search either virtual or real 3D target points, which back-project close to the center of the image for as long a period of time as possible. The stabilization algorithm then transforms the original images of the sequence so that these 3D target points are kept exactly in the center of the image, which, in case of real 3D target points, produces a perfectly stable result at the image center. Furthermore, different methods of additional user interaction are investigated. It is shown that the stabilization process can easily be controlled and that it can be combined with state-of-the-art tracking techniques in order to obtain a powerful image stabilization tool. The approach is evaluated on a variety of videos taken with a hand-held camera in natural scenes.
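In the simplest pure-translation case, the fixation idea reduces to shifting each frame so that the projected target point lands at the image centre. The sketch below shows only that step with hypothetical names; the actual method estimates camera motion with structure-from-motion and applies full image transformations.

```python
def stabilizing_shifts(target_projections, image_size):
    """Per-frame (dx, dy) translation that moves the projected 3D target
    point to the image centre (pure-translation simplification)."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    return [(cx - px, cy - py) for px, py in target_projections]

# The target drifts by (10, 10) pixels between two frames of a 200x100
# image; the second frame is shifted back by the same amount.
shifts = stabilizing_shifts([(100.0, 50.0), (110.0, 60.0)], (200, 100))
```

Applying these shifts keeps the target pixel exactly at the centre, which is the "perfectly stable result at the image center" the abstract describes for real 3D target points.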

  5. Remote-controlled pan, tilt, zoom cameras at Kilauea and Mauna Loa Volcanoes, Hawai'i

    Science.gov (United States)

    Hoblitt, Richard P.; Orr, Tim R.; Castella, Frederic; Cervelli, Peter F.

    2008-01-01

    Lists of important volcano-monitoring disciplines usually include seismology, geodesy, and gas geochemistry. Visual monitoring, the essence of volcanology, is usually not mentioned. Yet observations of the outward appearance of a volcano provide data that are as important as those provided by the other disciplines. The eye was almost certainly the first volcano-monitoring tool used by early man. Early volcanology was mostly descriptive and was based on careful visual observations of volcanoes. There is still no substitute for the eye of an experienced volcanologist. Today, scientific instruments replace or augment our senses as monitoring tools because instruments are faster and more sensitive, work tirelessly day and night, keep better records, operate in hazardous environments, do not generate lawsuits when damaged or destroyed, and in most cases are cheaper. Furthermore, instruments are capable of detecting phenomena that are outside the reach of our senses. The human eye is now augmented by the camera. Sequences of timed images provide a record of visual phenomena that occur on and above the surface of volcanoes. Photographic monitoring is a fundamental monitoring tool; image sequences can often provide the basis for interpreting other data streams. Monitoring data are most useful when they are generated and available for analysis in real time or near real time. This report describes the current (as of 2006) system for real-time photograph acquisition and transmission from remote sites on Kilauea and Mauna Loa volcanoes to the U.S. Geological Survey Hawaiian Volcano Observatory (HVO). It also describes how the photographs are archived and analyzed. In addition to providing system documentation for HVO, we hope that the report will prove useful as a practical guide to the construction of a high-bandwidth network for the telemetry of real-time data from remote locations.

  6. A novel dynamic frame rate control algorithm for H.264 low-bit-rate video coding

    Institute of Scientific and Technical Information of China (English)

    Yang Jing; Fang Xiangzhong

    2007-01-01

    The goal of this paper is to improve the human visual perceptual quality as well as the coding efficiency of H.264 video under low-bit-rate conditions by adaptively adjusting the number of skipped frames. The frames to encode are selected according to the motion activity of each frame and the motion accumulation of successive frames. The motion activity analysis is based on the statistics of motion vectors, with consideration of the characteristics of the H.264 coding standard. A prediction model of motion accumulation is proposed to reduce the complex computation of motion estimation. The dynamic encoding frame rate control algorithm is applied at both the frame level and the GOB (Group of Macroblocks) level. Simulations compare the performance of JM76 with the proposed frame-level scheme and GOB-level scheme.
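The frame-selection rule (encode a frame when its own motion activity is high or when skipped motion has accumulated) can be sketched as follows; the thresholds and names are illustrative assumptions, not the paper's algorithm:

```python
def select_encoded_frames(motion, activity_thresh, accum_limit):
    """Indices of frames to encode; the rest are skipped."""
    encoded, accum = [], 0.0
    for i, m in enumerate(motion):
        accum += m  # motion of skipped frames accumulates
        if m >= activity_thresh or accum >= accum_limit:
            encoded.append(i)
            accum = 0.0  # encoding a frame resets the accumulation
    return encoded

# A burst of motion at frame 2 is encoded immediately; slow drift then
# forces a refresh at frame 5.
frames = select_encoded_frames([0.1, 0.1, 0.9, 0.1, 0.1, 0.1], 0.8, 0.25)
```

The accumulation term is what prevents long low-motion stretches from being skipped entirely, which would otherwise make slow drift visible as a jump.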

  7. Scalable video object coding & QoS control for next generation space internet

    Institute of Scientific and Technical Information of China (English)

    TU GuoFang; ZHANG Can; HEINRICH Nimann; XU Jie; WU WeiRen

    2008-01-01

    The next generation space internet (NGSI) is an all-IP-based mobile network that merges land-based, sea-based, sky-based, space-based, and deep-space-based networks using existing access network technologies. NGSI exhibits high signal propagation delays, high error rates, bandwidth variation, and time-varying conditions. In order to adapt to various space communication environment constraints and bandwidth variation, we propose a reduced-dimension scalable video coding scheme based on the CCSDS IDCS algorithm and a quality of service (QoS) control method using cross-layer design (CLD). The experimental results show that this new method performs better than existing algorithms and can adapt to bandwidth variation dynamically.

  8. Understanding Computer-Based Digital Video.

    Science.gov (United States)

    Martindale, Trey

    2002-01-01

    Discussion of new educational media and technology focuses on producing and delivering computer-based digital video. Highlights include video standards, including international standards and aspect ratio; camera formats and features, including costs; shooting digital video; editing software; compression; and a list of informative Web sites. (LRW)

  9. Assessment of Active Video Gaming Using Adapted Controllers by Individuals With Physical Disabilities: A Protocol.

    Science.gov (United States)

    Malone, Laurie A; Padalabalanarayanan, Sangeetha; McCroskey, Justin; Thirumalai, Mohanraj

    2017-06-16

    Individuals with disabilities are typically more sedentary and less fit than their peers without disabilities. Furthermore, engaging in physical activity can be extremely challenging due to physical impairments associated with disability and fewer opportunities to participate. One option for increasing physical activity is playing active video games (AVG), a category of video games that requires much more body movement for successful play than conventional push-button or joystick actions. However, many current AVGs are inaccessible or offer limited play options for individuals who are unable to stand, have balance issues or poor motor control, or cannot use their lower body to perform game activities. Making AVGs accessible to people with disabilities offers an innovative approach to overcoming barriers to participation in physical activity. Our aim was to compare the effect of off-the-shelf and adapted game controllers on quality of game play, enjoyment, and energy expenditure during active video gaming in persons with physical disabilities, specifically those with mobility impairments (i.e., unable to stand, balance issues, poor motor control, unable to use the lower extremity for gameplay). The gaming controllers evaluated include off-the-shelf and adapted versions of the Wii Fit balance board and gaming mat. Participants (10-60 years old) came to the laboratory a total of three times. During the first visit, participants completed a functional assessment and became familiar with the equipment and games to be played. For the functional assessment, participants performed 18 functional movement tasks from the International Classification of Functioning, Disability, and Health. They also answered a series of questions from the Patient Reported Outcomes Measurement Information System and Quality of Life in Neurological Conditions measurement tools, to provide a personal perspective regarding their own functional ability. 
For Visit 2, metabolic data were

  10. Video compressive sensing using Gaussian mixture models.

    Science.gov (United States)

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2014-11-01

    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.
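For a single Gaussian component and a single scalar measurement, the analytic inversion reduces to the standard Gaussian posterior mean, in which the matrix inverse collapses to a division. The toy 2-pixel sketch below shows only that closed form (all names are hypothetical); the paper operates on full spatio-temporal patches with a mixture of Gaussians.

```python
def gaussian_posterior_mean(mu, sigma, a, noise_var, y):
    """Posterior mean of x ~ N(mu, Sigma) given y = a.x + noise:
    E[x|y] = mu + Sigma a^T (a Sigma a^T + noise_var)^(-1) (y - a.mu)."""
    # s = a Sigma a^T + noise_var (a scalar for one measurement)
    a_sigma = [sum(a[k] * sigma[k][j] for k in range(2)) for j in range(2)]
    s = sum(a_sigma[j] * a[j] for j in range(2)) + noise_var
    innovation = y - sum(a[j] * mu[j] for j in range(2))
    sigma_at = [sum(sigma[j][k] * a[k] for k in range(2)) for j in range(2)]
    return [mu[j] + sigma_at[j] * innovation / s for j in range(2)]

# A noiseless measurement of the first pixel recovers it exactly and
# leaves the (uncorrelated) second pixel at its prior mean.
x_hat = gaussian_posterior_mean([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                                [1.0, 0.0], 0.0, 3.0)
```

This closed form is why the GMM inversion is efficient: each mixture component contributes one such linear update, weighted by its posterior probability.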

  11. Electronic cameras for low-light microscopy.

    Science.gov (United States)

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the parameters used to evaluate their performance, and describes key features of the different camera formats. It also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although many types of cameras are available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs offer the best available performance, both in signal-to-noise ratio and in spatial resolution. Slow-scan cameras are thus the first choice for experiments on fixed specimens, such as immunofluorescence and fluorescence in situ hybridization measurements. If video-rate imaging is required, however, slow-scan CCD cameras need not be considered. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging of very dim specimens is required, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are attractive options if one needs both video-rate acquisition and longer integration times for less bright samples. This flexibility can support many diverse applications with highly varied light levels.
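    The trade-offs among these camera formats follow largely from the standard per-pixel noise model. A small illustrative sketch (the quantum efficiency, read noise, and dark current are assumed example values, not figures from the chapter):

```python
import math

def snr(photons, qe=0.7, read_noise_e=6.0, dark_e=0.1):
    """Shot-noise-limited SNR of a single pixel (standard CCD noise model).
    photons: photons arriving during the integration time;
    qe: quantum efficiency; read_noise_e: RMS read noise in electrons;
    dark_e: accumulated dark-current electrons. All values illustrative."""
    signal = qe * photons
    noise = math.sqrt(signal + dark_e + read_noise_e ** 2)
    return signal / noise

# Longer integration (more photons) improves SNR, which is why slow-scan
# CCDs excel on fixed specimens where exposure time is unconstrained:
print(round(snr(100), 2), round(snr(10_000), 2))
```

    Electron-multiplying CCDs attack the same equation from the other side, effectively suppressing the read-noise term for very dim, video-rate work.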

  12. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for extracting and tracking foreground video objects in real time from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
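    The chroma-key step mentioned above amounts to masking pixels close to a key color. A minimal NumPy sketch (the key color and distance threshold are illustrative):

```python
import numpy as np

def chroma_key_mask(frame_rgb, key=(0, 255, 0), tol=80.0):
    """Return a boolean foreground mask: True where the pixel is NOT
    within `tol` (Euclidean RGB distance) of the key color."""
    dist = np.linalg.norm(frame_rgb.astype(float) - np.array(key, float), axis=-1)
    return dist > tol

frame = np.zeros((2, 2, 3), np.uint8)
frame[0, 0] = (0, 255, 0)      # pure key color -> background
frame[1, 1] = (200, 30, 40)    # foreground object
print(chroma_key_mask(frame))
```

    Production systems typically work in a chroma plane (e.g., YCbCr) rather than raw RGB to be robust to brightness changes, but the masking principle is the same.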

  13. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  14. An intelligent automated door control system based on a smart camera

    National Research Council Canada - National Science Library

    Yang, Jie-Ci; Lai, Chin-Lun; Sheu, Hsin-Teng; Chen, Jiann-Jone

    2013-01-01

    This paper presents an innovative access control system, based on human detection and path analysis, to reduce false automatic door system actions while increasing the added values for security applications...

  15. A controlled pilot trial of two commercial video games for rehabilitation of arm function after stroke.

    Science.gov (United States)

    Chen, Mei-Hsiang; Huang, Lan-Ling; Lee, Chang-Franw; Hsieh, Ching-Lin; Lin, Yu-Chao; Liu, Hsiuchih; Chen, Ming-I; Lu, Wen-Shian

    2015-07-01

    To investigate the acceptability and potential efficacy of two commercial video games for improving upper extremity function after stroke, in order to inform future sample size and study design. A controlled clinical trial design using sequential allocation into groups. A clinical occupational therapy department. Twenty-four first-stroke patients. Patients were assigned to one of three groups: conventional group, Wii group, and XaviX group. In addition to regular one-hour conventional rehabilitation, each group received an additional half-hour of upper extremity exercises via conventional devices, Wii games, or XaviX games, for eight weeks. The Fugl-Meyer Assessment of motor function, Box and Block Test of Manual Dexterity, Functional Independence Measure, and upper extremity range of motion were used at baseline and postintervention. A questionnaire was also used to assess motivation and enjoyment. On the Fugl-Meyer Assessment of motor function, the effect size of the difference in change scores between the Wii group (0.71, SD 0.59) and the conventional group (0.28, SD 0.58) was larger (d = 0.74) than that between the XaviX group (0.44, SD 0.49) and the conventional group (0.28, SD 0.58) (d = 0.30). Patient enjoyment was significantly greater in the video game groups (Wii mean 4.25, SD 0.89; XaviX mean 4.38, SD 0.52) than in the conventional group (mean 2.25, SD 0.89; F = 18.55, p …), supporting the use of commercial video games in stroke rehabilitation. A sample size of 72 patients (24 per group) would be appropriate for a full study. © The Author(s) 2014.
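    The between-group effect sizes reported above (d = 0.74, d = 0.30) are Cohen's d values. A small sketch of the computation from change-score means and SDs (the per-group n = 8 is an assumption based on the 24 patients split evenly across three groups):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation (standard formula)."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Wii vs conventional change scores from the abstract:
print(round(cohens_d(0.71, 0.59, 8, 0.28, 0.58, 8), 2))
```

    The result lands near the reported d = 0.74, the kind of moderate-to-large effect that justifies the proposed 24-per-group sample size for a full trial.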

  16. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object that has varying intensities of radiation emanating therefrom and that may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  17. Video-Feedback Intervention to Promote Positive Parenting Adapted to Autism (VIPP-AUTI): A Randomized Controlled Trial

    Science.gov (United States)

    Poslawsky, Irina E; Naber, Fabiënne BA; Bakermans-Kranenburg, Marian J; van Daalen, Emma; van Engeland, Herman; van IJzendoorn, Marinus H

    2015-01-01

    In a randomized controlled trial, we evaluated the early intervention program Video-feedback Intervention to promote Positive Parenting adapted to Autism (VIPP-AUTI) with 78 primary caregivers and their child (16-61 months) with Autism Spectrum Disorder. VIPP-AUTI is a brief attachment-based intervention program, focusing on improving parent-child…

  18. Online camera-gyroscope autocalibration for cell phones.

    Science.gov (United States)

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope plays a key role in helping estimate 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured, with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation in a rolling-shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online with all kinds of camera motion and can be embedded in gyro-aided applications such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground-truth values.
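    The camera rotation that the gyroscope helps estimate is obtained by integrating body rates over time. A first-order sketch (the paper's EKF additionally estimates biases, timestamp offsets, and rolling-shutter parameters, all omitted here):

```python
import numpy as np

def integrate_gyro(omegas, dts):
    """Integrate body-rate gyroscope samples into a rotation matrix by
    composing per-sample Rodrigues rotations (a production implementation
    would typically use quaternions and handle timestamp alignment)."""
    R = np.eye(3)
    for w, dt in zip(omegas, dts):
        theta = np.linalg.norm(w) * dt
        if theta < 1e-12:
            continue
        k = w / np.linalg.norm(w)   # unit rotation axis
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = R @ (np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K)
    return R

# 100 samples of 0.9 degrees each about z should yield a 90-degree rotation,
# which maps the x axis onto the y axis:
w_z = np.radians(90.0)   # rad/s about the z axis
R = integrate_gyro([np.array([0.0, 0.0, w_z])] * 100, [0.01] * 100)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 3))
```
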

  19. Adaptive compressive sensing camera

    Science.gov (United States)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charge comes from Einstein's photoelectric effect. Following manufacturing design principles, we alter each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. Data storage is reduced immensely; the order of magnitude of the saving is inversely proportional to the target's angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as dual photon-detector (PD) analog circuitry for change detection, which predicts whether to skip a frame or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level: the charge-transport bias voltage either steers charge toward neighboring buckets or, if not, sends it to ground drainage. Since a snapshot image is not a video, we cannot apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing by FFT, thresholding of significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery performed selectively by the CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, for new-frame selection, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]M,N: M(t) = K(t) log N(t).
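    The frame-admission decision made by the sample-and-hold change-detection circuitry can be sketched in software as a simple per-frame energy test (the threshold and frame contents below are illustrative):

```python
import numpy as np

def admit_frames(frames, threshold=5.0):
    """Yield indices of frames whose mean absolute difference from the last
    admitted frame exceeds `threshold` (sketch of change-detection gating:
    unchanged frames are skipped, saving storage and measurements)."""
    last = None
    for i, f in enumerate(frames):
        if last is None or np.mean(np.abs(f.astype(float) - last)) > threshold:
            last = f.astype(float)
            yield i

static = np.full((4, 4), 100, np.uint8)
moved = np.full((4, 4), 160, np.uint8)
print(list(admit_frames([static, static, moved, moved, static])))
```

    Only admitted frames would then be measured through the random sparse matrix [Φ], which is where the storage saving inversely proportional to target angular speed comes from.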

  20. Feedback from video for virtual reality Navigation

    Energy Technology Data Exchange (ETDEWEB)

    Tsap, L V

    2000-10-27

    Important preconditions for wide acceptance of virtual reality (VR) systems include their comfort, ease, and naturalness of use. Most existing trackers suffer from discomfort-related issues. For example, body-based trackers (hand controllers, joysticks, helmet attachments, etc.) restrict spontaneity and naturalness of motion, while ground-based devices limit the workspace by literally binding an operator to the ground. There are similar problems with controls. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed and robust skin-color segmentation that accounts for illumination variations.
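    Skin-color segmentation of this kind is commonly done in a hue-based color space. A crude stdlib sketch (the thresholds are illustrative and fixed, whereas the paper's method adapts to illumination variations):

```python
import colorsys

def is_skin(r, g, b):
    """Crude skin test on hue/saturation/value. Thresholds are illustrative
    assumptions, not the paper's actual illumination-adaptive model."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Skin hues cluster near red/orange; moderate saturation; not too dark.
    return (h < 0.11 or h > 0.94) and 0.2 < s < 0.7 and v > 0.35

print(is_skin(224, 172, 105), is_skin(30, 60, 200))
```

    A per-pixel test like this yields a binary hand mask, which the depth channel can then disambiguate from skin-colored background objects.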

  1. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user’s quality of experience (QoE. Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.
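    The energy benefit of downloading several chunks per radio wake-up follows from the fixed "tail" energy that wireless interfaces spend after each transfer. A toy model (every constant below is invented for illustration, not measured in the paper):

```python
import math

def burst_energy(n_chunks, chunks_per_burst,
                 t_chunk=0.2, p_active=1.2, t_tail=5.0, p_tail=0.6):
    """Total radio energy (J) to fetch n_chunks in bursts of chunks_per_burst.
    Each burst pays the active download energy plus one fixed radio tail.
    All constants are illustrative assumptions."""
    bursts = math.ceil(n_chunks / chunks_per_burst)
    return n_chunks * t_chunk * p_active + bursts * t_tail * p_tail

# Batching chunks amortizes the tail energy across more useful work:
print(burst_energy(60, 1), burst_energy(60, 10))
```

    The joint optimization in the paper additionally couples this choice with CPU decoding, since larger batches also let the CPU sleep longer between decode bursts.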

  2. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    Science.gov (United States)

    Oda, Naoki; Kotani, Kentaro

    The paper describes a position and posture controller design for a redundant robot manipulator based on the absolute position measured by an external PSD vision sensor. The redundancy provides the capability to avoid obstacles while continuing given end-effector jobs under contact with a middle link of the manipulator. Under contact motion, the deformation due to joint torsion, obtained by comparing the internal and external position sensors, is actively suppressed by an internal/external position hybrid controller. The selection matrix of the hybrid loop is given as a function of the deformation, and the detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results on a 3-link planar redundant manipulator.

  3. Teaching parents about responsive feeding through a vicarious learning video: A pilot randomized controlled trial

    Science.gov (United States)

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using com...

  4. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can easily be extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at blocks' temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding that combines the advantages of wavelet coding (performance, scalability) with acceptably reduced memory requirements and no additional CPU complexity, while avoiding jerks. We also propose an efficient quality-allocation procedure to ensure constant quality over time.
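    A temporal wavelet transform of the kind discussed can be illustrated with one Haar level over frame pairs, which reconstructs perfectly. A minimal sketch (the paper's scan-based transform additionally bounds memory by processing frames as they arrive rather than whole 3D blocks):

```python
import numpy as np

def haar_temporal(frames):
    """One level of a temporal Haar transform over pairs of frames
    (average + detail), as used in t+2D subband video coding."""
    f = np.asarray(frames, float)
    lo = (f[0::2] + f[1::2]) / 2.0   # temporal low band
    hi = (f[0::2] - f[1::2]) / 2.0   # temporal high band
    return lo, hi

def haar_temporal_inverse(lo, hi):
    """Exact inverse of haar_temporal."""
    f = np.empty((2 * len(lo),) + lo.shape[1:])
    f[0::2] = lo + hi
    f[1::2] = lo - hi
    return f

frames = np.random.default_rng(1).standard_normal((8, 4, 4))
lo, hi = haar_temporal(frames)
print(np.allclose(haar_temporal_inverse(lo, hi), frames))
```

    Quantizing the high band more aggressively at block boundaries is exactly what causes the playback jerks the paper's scan-based scheme is designed to avoid.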

  5. Modeling the coupling effect of jitter and attitude control on TDICCD camera imaging

    Science.gov (United States)

    Li, Yulun; Yang, Zhen; Ma, Xiaoshan; Ni, Wei

    2016-10-01

    Vibration has an important influence on space-borne TDICCD imaging quality. It generally arises from an interaction between satellite jitter and attitude control. Previous modeling of this coupling relation has mainly concentrated on accurate modal analysis, transfer paths, damping design, etc. Nevertheless, when controlling attitude, the coupling terms among the three body axes are usually ignored. This is what we study in this manuscript. First, a simplified formulation dedicated to this problem is established. Second, we use Dymola 2016 to execute the simulation model, exploiting the Modelica synchronous features proposed in recent years. The results demonstrate that the studied effect can introduce additional oscillatory modes and slow the attitude stabilization process. In addition, once fully stabilized, there is statistically no difference over time, but the effect still intensifies motion blur by a tiny amount. We state that this effect might be worth considering in image restoration.

  6. IOTA: the array controller for a gigapixel OTCCD camera for Pan-STARRS

    Science.gov (United States)

    Onaka, Peter; Tonry, John; Luppino, Gerard; Lockhart, Charles; Lee, Aaron; Ching, Gregory; Isani, Sidik; Uyeshiro, Robin

    2004-09-01

    The Pan-STARRS project has undertaken an ambitious effort to develop a completely new array-controller architecture, fundamentally driven by the requirements of the large 1-gigapixel, low-noise, high-speed OTCCD mosaic as well as the size, power, and weight restrictions of the Pan-STARRS telescope. The result is a very small form factor, next-generation, scalable controller building block with 1-Gigabit Ethernet interfaces that will be assembled into a system reading out 512 outputs at ~1-megapixel sample rates on each output. The paper also discusses critical technology and fabrication techniques such as greater-than-1-MHz analog-to-digital converters (ADCs), multiple fast sampling and digital calculation of multiple correlated samples (DMCS), ball-grid-array (BGA) packaged circuits, and Linux running on embedded field-programmable gate arrays (FPGAs) with hard-core microprocessors for the prototype currently being developed.

  7. Video-Assisted Thoracoscopic Sympathectomy for Palmar Hyperhidrosis: A Meta-Analysis of Randomized Controlled Trials.

    Directory of Open Access Journals (Sweden)

    Wenxiong Zhang

    Full Text Available Video-assisted thoracoscopic sympathectomy (VTS) is effective in treating palmar hyperhidrosis (PH). However, there is no consensus on which segment should undergo VTS to maximize efficacy and minimize the complications of compensatory hyperhidrosis (CH). This study was designed to compare the efficacy and side effects of VTS of different segments in the treatment of PH. A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus, and Google Scholar was performed to identify studies comparing VTS of different segments for treatment of PH. The data were analyzed with RevMan 5.3 software and SPSS 18.0. A total of eight randomized controlled trials (RCTs) involving 1200 patients were included. Meta-analysis showed that single-segment/low-segment VTS could reduce the risk of moderate/severe CH compared with multiple-segment/high-segment VTS. The risk of total CH showed a similar trend. In the subgroup analysis of single-segment VTS, no significant differences were found between T2/T3 VTS and other segments in postoperative CH and degree of CH. T4 VTS showed better efficacy in limiting CH compared with other segments. T4 appears to be the best segment for the surgical treatment of PH. Our findings require further validation in more high-quality, large-scale randomized controlled trials.

  8. Video-Assisted Thoracoscopic Sympathectomy for Palmar Hyperhidrosis: A Meta-Analysis of Randomized Controlled Trials

    Science.gov (United States)

    Zhang, Wenxiong; Yu, Dongliang; Jiang, Han; Xu, Jianjun; Wei, Yiping

    2016-01-01

    Objectives Video-assisted thoracoscopic sympathectomy (VTS) is effective in treating palmar hyperhidrosis (PH). However, there is no consensus on which segment should undergo VTS to maximize efficacy and minimize the complications of compensatory hyperhidrosis (CH). This study was designed to compare the efficacy and side effects of VTS of different segments in the treatment of PH. Methods A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus, and Google Scholar was performed to identify studies comparing VTS of different segments for treatment of PH. The data were analyzed with RevMan 5.3 software and SPSS 18.0. Results A total of eight randomized controlled trials (RCTs) involving 1200 patients were included. Meta-analysis showed that single-segment/low-segment VTS could reduce the risk of moderate/severe CH compared with multiple-segment/high-segment VTS. The risk of total CH showed a similar trend. In the subgroup analysis of single-segment VTS, no significant differences were found between T2/T3 VTS and other segments in postoperative CH and degree of CH. T4 VTS showed better efficacy in limiting CH compared with other segments. Conclusions T4 appears to be the best segment for the surgical treatment of PH. Our findings require further validation in more high-quality, large-scale randomized controlled trials. PMID:27187774

  9. Stationary Stereo-Video Camera Stations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accurate and precise stock assessments are predicated on accurate and precise estimates of life history parameters, abundance, and catch across the range of the...

  10. Visual acuity, contrast sensitivity, and range performance with compressed motion video

    NARCIS (Netherlands)

    Bijl, P.; Vries, S.C. de

    2010-01-01

    Video of visual acuity (VA) and contrast sensitivity (CS) test charts in a complex background was recorded using a CCD color camera mounted on a computer-controlled tripod and was fed into real-time MPEG-2 compression/decompression equipment. The test charts were based on the triangle orientation

  12. The effectiveness of video interaction guidance in parents of premature infants: A multicenter randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Tooten Anneke

    2012-06-01

    Full Text Available Abstract Background Studies have consistently found a high incidence of neonatal medical problems, premature births and low birth weights in abused and neglected children. One of the explanations proposed for the relation between neonatal problems and adverse parenting is a possible delay or disturbance in the bonding process between the parent and infant. This hypothesis suggests that, due to neonatal problems, the development of an affectionate bond between the parent and the infant is impeded. The disruption of an optimal parent-infant bond may, in turn, predispose to distorted parent-infant interactions and thus facilitate abusive or neglectful behaviours. Video Interaction Guidance (VIG) is expected to promote the bond between parents and newborns and to diminish non-optimal parenting behaviour. Methods/design This study is a multi-center randomised controlled trial to evaluate the effectiveness of Video Interaction Guidance in parents of premature infants. In this study 210 newborn infants and their parents will be included: n = 70 healthy term infants (>37 weeks GA), n = 70 moderate term infants (32-37 weeks GA) recruited from the maternity wards of 6 general hospitals, and n = 70 extremely preterm or very low birth weight infants (<32 weeks GA). The sample will be divided into a reference group (i.e. full term infants and their parents, receiving care as usual), a control group (i.e. premature infants and their parents, receiving care as usual) and an intervention group (i.e. premature infants and their parents, receiving VIG). The data will be collected during the first six months after birth using observations of parent-infant interactions, questionnaires and semi-structured interviews. Primary outcomes are the quality of parental bonding and parent-infant interactive behaviour. Parental secondary outcomes are (post)traumatic stress symptoms, depression, anxiety and feelings of anger and hostility. Infant secondary outcomes are behavioral aspects such as crying

  13. A Novel Camera Based Mobile Robot With Obstacle Avoidance And Fire Extinguish Control

    Directory of Open Access Journals (Sweden)

    S.Vivekanadan

    2016-04-01

    Full Text Available The project is based on mobile wireless robot technology performing the dual operations of obstacle avoidance and fire extinguishing. The sensors used are an ultrasonic sensor for obstacle avoidance and a flame sensor to detect fire. Signals are received by an Arduino board, which controls the robot; motor drives propel it, and a wireless monitoring system displays the current scene in detail. The robot autonomously detects and extinguishes fire: flame sensors placed on its sides continuously scan for fire, while the ultrasonic sensor detects obstacles. When a fire is detected, the robot moves toward it, stops in front of it, and triggers the extinguisher to put out the fire. To perform the extinguishing process, the robot has an arm with an electronic valve, and a motor on the body changes the angle of the arm; both the arm and the motor are controlled by the Arduino. The power source for the robot is a battery.

  14. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  15. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
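    The time-lapse and GPS-timestamping behavior described can be sketched as a small Python loop (the filename scheme and the `capture` callback below are illustrative stand-ins for the system's actual camera and GPS code):

```python
import time
from datetime import datetime, timezone

def image_filename(prefix="KIL", when=None):
    """UTC-timestamped image filename, in the spirit of the GPS-disciplined
    timing described in the paper (the naming scheme is an assumption)."""
    when = when or datetime.now(timezone.utc)
    return f"{prefix}_{when:%Y%m%d_%H%M%S}.jpg"

def timelapse(capture, interval_s=60, n_shots=3):
    """Minimal time-lapse loop; `capture(filename)` stands in for the
    actual Raspberry Pi camera call."""
    names = []
    for _ in range(n_shots):
        name = image_filename()
        capture(name)
        names.append(name)
        time.sleep(interval_s)
    return names

# Example with a dummy capture function and zero interval:
print(timelapse(lambda f: None, interval_s=0, n_shots=2))
```

    Stamping filenames from a GPS-disciplined clock rather than the network is what keeps image timing accurate even when the telemetry link is down.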

  16. An optically remote powered subsea video monitoring system

    Science.gov (United States)

    Lau, Fat Kit; Stewart, Brian; McStay, Danny

    2012-06-01

    The drive for ocean pollution prevention requires a significant increase in the extent and type of monitoring of subsea hydrocarbon production equipment. The sensors, instrumentation, control electronics, data logging and transmission units comprising such monitoring systems all require power. Conventionally, electrical power is supplied by standard subsea electrical cabling. The ability to visualise the assets being monitored, and any changes or faults in the equipment, is advantageous to an overall monitoring system. However, the effective use of video cameras, particularly if transmission of real-time high-resolution video is desired, requires high data rates and low-loss communication. This can be challenging for heavy and costly electrical cables over extended distances, so optical fibre is often adopted as the communication channel. Using optical fibre cables for both communications and power delivery can further reduce the cost of cabling. In this paper we report a prototype optically remote powered subsea video monitoring system that provides an alternative approach to powering subsea video cameras. The source power is transmitted to the subsea module through optical fibre, with an optical-to-electrical converter located in the module. To facilitate intelligent power management in the subsea module, supercapacitor-based intermediate energy storage is installed. The feasibility of the system is demonstrated, including energy charging and camera operation times.

  17. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a fascinating new technology that removes real-world content from live video streams. This live video manipulation actually removes real objects and generates a coherent video stream in real time, so that viewers cannot detect the modified content. Existing approaches are restricted to moving objects and static or almost static cameras, and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  18. Image Space and Time Interpolation for Video Navigation

    OpenAIRE

    2011-01-01

    The aim of image-based video navigation is essentially to achieve a continuous change of viewpoint without the need for complete camera coverage of the space of interest. By making use of image interpolation, the amount of video hardware can be reduced drastically by replacing physical cameras at the desired viewpoints with virtual ones. In this work, based on previously published approaches, an algorithm for time and space image interpolation is developed with a video application...

  19. Hazmat Cam Wireless Video System

    Energy Technology Data Exchange (ETDEWEB)

    Kevin L. Young

    2006-02-01

    This paper describes the Hazmat Cam Wireless Video System and its application to emergency response involving chemical, biological or radiological contamination. The Idaho National Laboratory designed the Hazmat Cam Wireless Video System to assist the National Guard Weapons of Mass Destruction - Civil Support Teams during their mission of emergency response to incidents involving weapons of mass destruction. The lightweight, handheld camera transmits encrypted, real-time video from inside a contaminated area, or hot-zone, to a command post located a safe distance away. The system includes a small wireless video camera, a true-diversity receiver, viewing console, and an optional extension link that allows the command post to be placed up to five miles from danger. It can be fully deployed by one person in a standalone configuration in less than 10 minutes. The complete system is battery powered. Each rechargeable camera battery powers the camera for 3 hours with the receiver and video monitor battery lasting 22 hours on a single charge. The camera transmits encrypted, low frequency analog video signals to a true-diversity receiver with three antennas. This unique combination of encryption and transmission technologies delivers encrypted, interference-free images to the command post under conditions where other wireless systems fail. The lightweight camera is completely waterproof for quick and easy decontamination after use. The Hazmat Cam Wireless Video System is currently being used by several National Guard Teams, the US Army, and by fire fighters. The system has been proven to greatly enhance situational awareness during the crucial, initial phase of a hazardous response allowing commanders to make better, faster, safer decisions.

  20. SMART VIDEO SURVEILLANCE SYSTEM FOR VEHICLE DETECTION AND TRAFFIC FLOW CONTROL

    Directory of Open Access Journals (Sweden)

    A. A. SHAFIE

    2011-08-01

    Full Text Available Traffic signal lights can be optimized using vehicle flow statistics obtained by Smart Video Surveillance Software (SVSS). This research focuses on an efficient traffic control system that detects and counts vehicle numbers at various times and locations. At present, one of the biggest problems in major cities in any country is the traffic jam during office hours and office break hours. Sometimes the traffic signal green light stays ON even though no vehicle is coming; similarly, long queues of vehicles wait even though the road ahead is empty, because the traffic signal timing is selected without properly investigating vehicle flow. This can be handled by adjusting the vehicle passing time, as implemented by our developed SVSS. A number of experimental results on vehicle flow are discussed graphically in order to test the feasibility of the developed system. Finally, an adaptive background model is proposed in SVSS in order to successfully detect target objects such as motorbikes, cars, buses, etc.
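
    The abstract does not specify the adaptive background model; a minimal running-average background subtractor of the kind commonly used for vehicle detection can be sketched in NumPy. The learning rate and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

class RunningAverageBackground:
    """Exponential running-average background model with foreground thresholding."""

    def __init__(self, alpha: float = 0.05, threshold: float = 30.0):
        self.alpha = alpha          # learning rate: how fast the background adapts
        self.threshold = threshold  # intensity difference marking a foreground pixel
        self.background = None

    def apply(self, frame: np.ndarray) -> np.ndarray:
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(np.float64)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # Adapt the background everywhere; the slow drift absorbs lighting changes.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return mask

if __name__ == "__main__":
    bg = RunningAverageBackground()
    empty_road = np.full((4, 4), 100.0)
    bg.apply(empty_road)                  # learn the empty scene
    with_car = empty_road.copy()
    with_car[1:3, 1:3] = 200.0            # a bright "vehicle" enters
    mask = bg.apply(with_car)
    print(int(mask.sum()))                # 4 foreground pixels
```

    Counting connected foreground blobs over time then yields the vehicle flow statistics the system feeds to the signal controller.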

  1. Using game theory for perceptual tuned rate control algorithm in video coding

    Science.gov (United States)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game-theoretic rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality within the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results demonstrate the algorithm's ability to achieve accurate bit rates with good perceptual quality and to maintain a stable buffer level.
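
    As a toy illustration of the bargaining idea (not the paper's actual dual-level algorithm), the weighted Nash Bargaining Solution for dividing a bit budget has a simple closed form: maximizing the product of each player's gain over its disagreement point gives every macroblock its minimum bits plus a complexity-weighted share of the surplus. The weights and disagreement points below are assumptions for illustration:

```python
def nbs_bit_allocation(budget: float, disagreement: list, weight: list) -> list:
    """Weighted Nash Bargaining Solution for a divisible bit budget.

    Maximizing prod_i (b_i - d_i)**w_i subject to sum(b_i) == budget
    yields b_i = d_i + w_i * (budget - sum(d)) / sum(w).
    """
    surplus = budget - sum(disagreement)
    if surplus < 0:
        raise ValueError("budget below the sum of disagreement points")
    total_w = sum(weight)
    return [d + w * surplus / total_w for d, w in zip(disagreement, weight)]

if __name__ == "__main__":
    # Three macroblocks: minimum bits to avoid visible artifacts, complexity weights.
    alloc = nbs_bit_allocation(1000.0, [100.0, 100.0, 200.0], [1.0, 2.0, 3.0])
    print(alloc)        # [200.0, 300.0, 500.0]
    print(sum(alloc))   # 1000.0
```

    Because the solution is scale-covariant and Pareto-optimal, no macroblock can be given more bits without taking them from another — the "fairness" property the abstract highlights.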

  2. Recommendation application for video head impulse test based on fuzzy logic control

    Institute of Scientific and Technical Information of China (English)

    NGUYEN Thi Anh Dao; KIM Dae Young; LEE Sang Min; KIM Kyu Sung; Seong Ro Lee; KWON Jang Woo

    2016-01-01

    Vestibulo-ocular reflex (VOR) is an important biological reflex that controls eye movement to ensure clear vision while the head is in motion. Nowadays, VOR measurement is commonly done with a video head impulse test based on either a velocity gain algorithm or a position gain algorithm, where velocity gain is calculated from head and eye velocity, whereas position gain is calculated from head and eye position. The aim of this work is first to compare the two algorithms' performance and to detect covert catch-up saccades, and then to propose a stand-alone recommendation application for patient diagnosis. In the first experiment, for the ipsilesional and contralesional sides, the calculated position gain (0.94±0.17) is higher than the velocity gain (0.84±0.19). Moreover, the gain asymmetry between lesion and intact sides obtained using velocity gain is mostly higher than that obtained using position gain (four out of five subjects). Consequently, for subjects with unilateral vestibular neuritis diagnosed from clinical symptoms and a vestibular function test, vestibular weakness is depicted much better by velocity gain than by position gain. Covert catch-up saccades and position gain are then used as inputs for the recommendation application.
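
    The two gain definitions being compared can be illustrated with a toy computation: velocity gain as the ratio of areas under the eye- and head-velocity curves, and position gain as the ratio of total eye to head displacement. This is a hypothetical sketch with synthetic data, not the authors' implementation (which also handles de-saccading and impulse detection):

```python
import numpy as np

def velocity_gain(head_vel: np.ndarray, eye_vel: np.ndarray, dt: float) -> float:
    """VOR velocity gain: area under |eye velocity| over area under |head velocity|."""
    return (np.abs(eye_vel).sum() * dt) / (np.abs(head_vel).sum() * dt)

def position_gain(head_vel: np.ndarray, eye_vel: np.ndarray, dt: float) -> float:
    """VOR position gain: total eye displacement over total head displacement."""
    head_pos = np.cumsum(head_vel) * dt
    eye_pos = np.cumsum(eye_vel) * dt
    return abs(eye_pos[-1]) / abs(head_pos[-1])

if __name__ == "__main__":
    dt = 0.004                                  # 250 Hz sampling
    t = np.arange(0, 0.15, dt)
    head = 200.0 * np.sin(np.pi * t / 0.15)     # bell-shaped head impulse, deg/s
    eye = -0.9 * head                           # perfectly compensatory eye, gain 0.9
    print(round(velocity_gain(head, eye, dt), 2))  # 0.9
    print(round(position_gain(head, eye, dt), 2))  # 0.9
```

    With a purely compensatory eye trace the two gains agree; they diverge on real recordings when covert catch-up saccades inflate the late part of the eye trace, which is why the study treats them as complementary inputs.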

  3. Panoramic Stereoscopic Video System for Remote-Controlled Robotic Space Operations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this project, the development of a novel panoramic, stereoscopic video system was proposed. The proposed system, which contains no moving parts, uses three-fixed...

  4. A brief report on the relationship between self-control, video game addiction and academic achievement in normal and ADHD students.

    Science.gov (United States)

    Haghbin, Maryam; Shaterian, Fatemeh; Hosseinzadeh, Davood; Griffiths, Mark D

    2013-12-01

    Over the last two decades, research into video game addiction has grown considerably. The present research aimed to examine the relationship between video game addiction, self-control, and academic achievement of normal and ADHD high school students. Based on previous research it was hypothesized that (i) there would be a relationship between video game addiction, self-control and academic achievement, (ii) video game addiction, self-control and academic achievement would differ between male and female students, and (iii) the relationship between video game addiction, self-control and academic achievement would differ between normal students and ADHD students. The research population comprised first-grade high school students of Khomeini-Shahr (a city in the central part of Iran). From this population, a sample group of 339 students participated in the study. The survey included the Game Addiction Scale (Lemmens, Valkenburg & Peter, 2009), the Self-Control Scale (Tangney, Baumeister & Boone, 2004) and the ADHD Diagnostic Checklist (Kessler et al., 2007). In addition to questions relating to basic demographic information, students' Grade Point Average (GPA) for two terms was used to measure their academic achievement. These hypotheses were examined using regression analysis. Among Iranian students, the relationship between video game addiction, self-control, and academic achievement differed between male and female students. However, the relationship between video game addiction, self-control, academic achievement, and type of student was not statistically significant. Although the results cannot demonstrate a causal relationship between video game use, video game addiction, and academic achievement, they suggest that high involvement in playing video games leaves less time for engaging in academic work.

  5. Assessment of Active Video Gaming Using Adapted Controllers by Individuals With Physical Disabilities: A Protocol

    OpenAIRE

    Malone, Laurie A.; Padalabalanarayanan, Sangeetha; McCroskey, Justin; Thirumalai, Mohanraj

    2017-01-01

    Background Individuals with disabilities are typically more sedentary and less fit compared to their peers without disabilities. Furthermore, engaging in physical activity can be extremely challenging due to physical impairments associated with disability and fewer opportunities to participate. One option for increasing physical activity is playing active video games (AVG), a category of video games that requires much more body movement for successful play than conventional push-button or joy...

  6. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  7. Multi-Object Tracking Scheme with Pyroelectric Infrared Sensor and Video Camera Coordination

    Institute of Scientific and Technical Information of China (English)

    李方敏; 姜娜; 熊迹; 张景源

    2014-01-01

    Many existing multi-object tracking systems based on pyroelectric infrared sensors suffer from significant errors when the tracked objects are close to each other or their trajectories intersect. To address this, we propose a multi-object tracking scheme in which pyroelectric infrared sensors and video cameras work cooperatively. The scheme exploits the advantages of both kinds of sensor, compensating for the weaknesses of each in target tracking. We first obtain a coarse position estimate by applying the least-squares method to the pyroelectric sensor data, and then correct the association matrix of the joint probabilistic data association algorithm using features extracted from the images or from the amplitude-frequency characteristics of the pyroelectric sensor signals, which effectively prevents false associations. The coarse estimate is then filtered by the joint probabilistic data association algorithm to obtain the final fine result. Experiments show that in multi-object crossover scenarios the tracking error of the proposed scheme is only one eighth to one quarter of that of the compared algorithms.
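
    The coarse least-squares positioning step is not detailed in the abstract; as a hypothetical stand-in, a generic linearized least-squares fix from range measurements to known sensor locations looks like this (the sensor layout and ranges below are made up for illustration):

```python
import numpy as np

def least_squares_position(sensors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Linearized least-squares position fix from ranges to known 2-D sensor positions.

    From |x - p_i|^2 = r_i^2, subtracting the first equation removes the
    quadratic |x|^2 term, leaving the linear system
    2*(p_i - p_0) . x = |p_i|^2 - |p_0|^2 - r_i^2 + r_0^2, solved by lstsq.
    """
    p0, r0 = sensors[0], ranges[0]
    A = 2.0 * (sensors[1:] - p0)
    b = (np.sum(sensors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         - ranges[1:] ** 2 + r0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

if __name__ == "__main__":
    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    target = np.array([3.0, 4.0])
    ranges = np.linalg.norm(sensors - target, axis=1)
    print(np.round(least_squares_position(sensors, ranges), 6))  # [3. 4.]
```

    In the proposed scheme an estimate of this kind is then handed to the joint probabilistic data association filter for refinement.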

  8. Synchronization of video recording and laser pulses including background light suppression

    Science.gov (United States)

    Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)

    2004-01-01

    An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning the laser pulse in the proper video field allows, after recording, viewing of the laser light image on a video monitor using the pause mode of a standard cassette-type VCR. The invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides background light suppression by increasing shutter speed during the frame in which the laser light image is captured. This results in the laser light appearing in one frame in which the background scene is suppressed while the laser light is unaffected; in all other frames the shutter speed is slower, allowing normal recording of the background scene. The invention also allows arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.
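
    The timing budget behind placing a pulse in a chosen video field reduces to simple interlaced-video arithmetic. The numbers below assume NTSC-style timing (29.97 frames/s, two fields per frame) purely for illustration; the patent itself is not tied to these values:

```python
def field_period_s(frame_rate_hz: float = 29.97, fields_per_frame: int = 2) -> float:
    """Duration of one interlaced field."""
    return 1.0 / (frame_rate_hz * fields_per_frame)

def trigger_delay_s(target_field: int, offset_s: float = 0.0,
                    frame_rate_hz: float = 29.97) -> float:
    """Delay from the frame-sync pulse so the laser lands in field 0 or field 1.

    An extra offset provides the fine positioning within the shutter opening.
    """
    if target_field not in (0, 1):
        raise ValueError("interlaced video has two fields per frame")
    return target_field * field_period_s(frame_rate_hz) + offset_s

if __name__ == "__main__":
    print(round(field_period_s() * 1e3, 3))           # 16.683 ms per field
    print(round(trigger_delay_s(1, 0.002) * 1e3, 3))  # 18.683 ms into the frame
```

    Delaying by an integer number of field periods plus a sub-field offset is what makes the captured pulse land predictably in one pausable video field.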

  9. Interventional video tomography

    Science.gov (United States)

    Truppe, Michael J.; Pongracz, Ferenc; Ploder, Oliver; Wagner, Arne; Ewers, Rolf

    1995-05-01

    Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery to visualize in real-time intraoperatively the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles. The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase accuracy and speed of the reconstruction the relative movement between objects and endoscope is continuously tracked by electronic sensors. The IVT image sequence represents a 4D data set in stereotactic space and contains image, surface topography and motion data. In ENT surgery an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure the cross sectional imaging data is superimposed with the digitally stored IVT image sequence. During surgery the video sequence component of the IVT simulation is substituted by the live video source. The IVT technology makes obsolete the use of 3D digitizing probes for the patient image coordinate transformation. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery a head-up display is used to overlay real-time reformatted cross sectional imaging data with the live video image.

  10. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011 (NCEI Accession 0131858)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  11. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video was collected between...

  12. Assessing stimulus control and promoting generalization via video modeling when teaching social responses to children with autism.

    Science.gov (United States)

    Jones, JoAnna; Lerman, Dorothea C; Lechago, Sarah

    2014-01-01

    We taught social responses to young children with autism using an adult as the recipient of the social interaction and then assessed generalization of performance to adults and peers who had not participated in the training. Although the participants' performance was similar across adults, responding was less consistent with peers, and a subsequent probe suggested that the recipient of the social behavior (adults vs. peers) controlled responding. We then evaluated the effects of having participants observe a video of a peer engaged in the targeted social behavior with another peer who provided reinforcement for the social response. Results suggested that certain irrelevant stimuli (adult vs. peer recipient) were more likely to exert stimulus control over responding than others (setting, materials) and that video viewing was an efficient way to promote generalization to peers.

  13. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
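
    The calibration itself is done with OpenCV in the paper; the radial distortion model being estimated for such wide-angle lenses can, however, be written out directly. A minimal NumPy sketch of the two-term Brown–Conrady radial model and its iterative inverse — the coefficients here are made-up illustrative values, not GoPro calibration results:

```python
import numpy as np

def distort_radial(points: np.ndarray, k1: float, k2: float) -> np.ndarray:
    """Apply two-term radial distortion to normalized image points.

    x_d = x * (1 + k1*r^2 + k2*r^4), with r^2 = x^2 + y^2 measured from the
    principal point in normalized coordinates.
    """
    r2 = np.sum(points ** 2, axis=1, keepdims=True)
    return points * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort_radial(points: np.ndarray, k1: float, k2: float,
                     iterations: int = 20) -> np.ndarray:
    """Invert the radial model by fixed-point iteration (there is no closed form)."""
    undistorted = points.copy()
    for _ in range(iterations):
        r2 = np.sum(undistorted ** 2, axis=1, keepdims=True)
        undistorted = points / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return undistorted

if __name__ == "__main__":
    k1, k2 = -0.25, 0.05                      # barrel distortion, illustrative only
    pts = np.array([[0.3, 0.2], [-0.5, 0.4]])
    d = distort_radial(pts, k1, k2)
    back = undistort_radial(d, k1, k2)
    print(np.allclose(back, pts, atol=1e-6))  # True
```

    Self-calibration estimates k1 and k2 (together with the focal length and principal point) from images of a known pattern; producing "undistorted scenes" is then an application of the inverse mapping above to every pixel.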

  14. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  15. The Impact of Short-Term Video Games on Performance among Children with Developmental Delays: A Randomized Controlled Trial

    OpenAIRE

    Ru-Lan Hsieh; Wen-Chung Lee; Jui-Hsiang Lin

    2016-01-01

    This prospective, randomized controlled study investigated the effects of short-term interactive video game playing among children with developmental delays participating in traditional rehabilitation treatment at a rehabilitation clinic. One hundred and one boys and 46 girls with a mean age of 5.8 years (range: 3 to 12 years) were enrolled in this study. All patients were confirmed to suffer from developmental delays, and were participating in traditional rehabilitation treatment. Children p...

  16. IndigoVision IP video keeps watch over remote gas facilities in Amazon rainforest

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2010-07-15

    In Brazil, IndigoVision's complete IP video security technology is being used to remotely monitor automated gas facilities in the Amazon rainforest. Twelve compounds containing millions of dollars of process automation, telemetry, and telecom equipment are spread across many thousands of miles of forest and centrally monitored in Rio de Janeiro using Control Center, the company's Security Management software. The security surveillance project uses a hybrid IP network comprising satellite, fibre optic, and wireless links. In addition to advanced compression technology and bandwidth tuning tools, the IP video system uses Activity Controlled Framerate (ACF), which controls the frame rate of the camera video stream based on the amount of motion in a scene. In the absence of activity, the video is streamed at a minimum framerate, but the moment activity is detected the framerate jumps to the configured maximum. This significantly reduces the amount of bandwidth needed. At each remote facility, fixed analog cameras are connected to transmitter nodules that convert the feed to high-quality digital video for transmission over the IP network. The system also integrates alarms with video surveillance. PIR intruder detectors are connected to the system via digital inputs on the transmitters. Advanced alarm-handling features in the Control Center software process the PIR detector alarms and alert operators to potential intrusions. This improves operator efficiency and incident response. 1 fig.
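
    The Activity Controlled Framerate behaviour described above — stream at a floor rate until scene motion is detected, then jump to the configured maximum — can be approximated with simple frame differencing. This is a hedged sketch of the general idea; the thresholds, rates, and class design are assumptions, not IndigoVision's implementation:

```python
import numpy as np

class ActivityControlledFramerate:
    """Pick a stream framerate from inter-frame motion energy."""

    def __init__(self, min_fps: float = 1.0, max_fps: float = 25.0,
                 motion_threshold: float = 5.0):
        self.min_fps = min_fps
        self.max_fps = max_fps
        self.motion_threshold = motion_threshold  # mean absolute pixel change
        self.prev = None

    def update(self, frame: np.ndarray) -> float:
        """Return the framerate to use after observing this frame."""
        frame = frame.astype(np.float64)
        if self.prev is None:
            self.prev = frame
            return self.min_fps            # no reference frame yet
        activity = np.mean(np.abs(frame - self.prev))
        self.prev = frame
        return self.max_fps if activity > self.motion_threshold else self.min_fps

if __name__ == "__main__":
    acf = ActivityControlledFramerate()
    still = np.full((8, 8), 50.0)
    print(acf.update(still))   # 1.0  (first frame)
    print(acf.update(still))   # 1.0  (no motion)
    moving = still.copy()
    moving[2:6, 2:6] = 200.0   # an intruder enters the scene
    print(acf.update(moving))  # 25.0 (motion detected)
```

    Over a satellite or wireless link, idling at the floor rate is what yields the large bandwidth savings the record describes.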

  17. Video- or text-based e-learning when teaching clinical procedures? A randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Buch SV

    2014-08-01

    Full Text Available Steen Vigh Buch,1 Frederik Philip Treschow,2 Jesper Brink Svendsen,3 Bjarne Skjødt Worm4 1Department of Vascular Surgery, Rigshospitalet, Copenhagen, Denmark; 2Department of Anesthesia and Intensive Care, Herlev Hospital, Copenhagen, Denmark; 3Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; 4Department of Anesthesia and Intensive Care, Bispebjerg Hospital, Copenhagen, Denmark Background and aims: This study investigated the effectiveness of two different levels of e-learning when teaching clinical skills to medical students. Materials and methods: Sixty medical students were included and randomized into two comparable groups. The groups were given either a video- or text/picture-based e-learning module and subsequently underwent both theoretical and practical examination. A follow-up test was performed 1 month later. Results: The students in the video group performed better than the illustrated text-based group in the practical examination, both in the primary test (P<0.001) and in the follow-up test (P<0.01). Regarding theoretical knowledge, no differences were found between the groups on the primary test, though the video group performed better on the follow-up test (P=0.04). Conclusion: Video-based e-learning is superior to illustrated text-based e-learning when teaching certain practical clinical skills. Keywords: e-learning, video versus text, medicine, clinical skills

  18. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  19. Dissociation between "where" and "how" judgements of one's own motor performance in a video-controlled reaching task.

    Science.gov (United States)

    Boy, F; Palluel-Germain, R; Orliaguet, J-P; Coello, Y

    2005-09-23

    The aim of the present study is to show that the sensorimotor system makes differential use of visual and internal (proprioception and efference copy) signals when evaluating either the spatial or the dynamical components of one's own motor response carried out under remote visual feedback. Subjects were required to monitor target-directed pointings from the images furnished by a video camera overhanging the workspace. By rotating the camera, the orientation of the movement perceived on the screen was either changed by 45 degrees (visual bias) or maintained in conformity with the actual trajectory (0 degrees). In both conditions, after completing twenty pointings, participants had to evaluate their visuomotor performance in two non-visual tests: they were asked both to reach the target in a single movement (evaluation of "how to reach the target") and to evaluate the mapping of the spatial layout in which they acted (evaluation of "where the starting position was and what the movement direction was"). Results revealed that although motor performance in the 45-degree condition was adapted to the visuomotor conflict, participants' evaluation of the spatial aspect of the performance was affected by the biased visual information. A different pattern emerged for the evaluation of "how" the target was reached, which was not affected by the visual bias. It is therefore suggested that segregated processing of visual and kinesthetic information occurs depending on the dimension of the performance being judged: visual information prevails when identifying the spatial context of a motor act, whereas proprioception and/or efference-copy signals are privileged when evaluating the dynamical component of the response.

  20. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingl...

  2. Geometric database maintenance using CCTV cameras and overlay graphics

    Science.gov (United States)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
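
    Superimposing a wireframe model on a calibrated CCTV image reduces to projecting the model's 3-D vertices through a pinhole camera model. A minimal sketch of that projection step (the focal length and principal point below are illustrative, not parameters from GE's testbed):

```python
import numpy as np

def project_points(points_3d: np.ndarray, focal_px: float,
                   cx: float, cy: float) -> np.ndarray:
    """Pinhole projection of camera-frame 3-D points to pixel coordinates.

    u = f * X/Z + cx,  v = f * Y/Z + cy  (Z is depth along the optical axis).
    """
    z = points_3d[:, 2:3]
    uv = focal_px * points_3d[:, :2] / z
    return uv + np.array([cx, cy])

if __name__ == "__main__":
    # Four corners of a 1 m square face, 5 m in front of the camera.
    square = np.array([[-0.5, -0.5, 5.0], [0.5, -0.5, 5.0],
                       [0.5, 0.5, 5.0], [-0.5, 0.5, 5.0]])
    pixels = project_points(square, focal_px=800.0, cx=320.0, cy=240.0)
    print(pixels)   # corners of a 160x160-pixel square centred at (320, 240)
```

    Multipoint positioning works in the opposite direction: the operator drags the overlay until projected vertices coincide with image features, and the implied object pose updates the geometric database.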

  3. Design Of A Multiformat Camera For Medical Fluoroscopy

    Science.gov (United States)

    Edmonds, Ernest W.; Hynes, David M.; Baranoski, Dennis; Rowlands, John; Krametz, Karl R.

    1981-07-01

    For a number of years, multi-format or multi-image cameras have been used in radiology departments to record images for ultrasound, nuclear medicine and computerized tomography. The authors have described in previous papers the development of a total low-dose fluoroscopy system, using a Siemens Videomed H 1023-line 25 MHz television system, a V.A.S. video disc recorder for pulsed fluoroscopy, and a modified Matrix Videoimager to record the spot film images. This approach has provided images of high quality, with dose and cost reductions of the order of 90% for the total examination. The particular problems involved in modifying a multi-image camera for fluoroscopic or radiographic procedures can be minimised by appropriate choice of monitor phosphor and correct control of the exposure sequence.

  4. Virtual 3D camera's modeling and roaming control

    Institute of Scientific and Technical Information of China (English)

    闫志远; 吴冬梅; 鲍义东; 杜志江

    2013-01-01

    A monocular virtual camera model was described using a vector description method. On this basis, the parameters of a 3D binocular virtual camera and their constraint relations were determined according to the principles of binocular vision, and a 3D virtual camera model containing two monocular virtual cameras was proposed. For this model, a roaming control method for binocular virtual cameras based on robot kinematics was developed to address the viewing-angle limitations and 3D imaging distortion caused by the difficulty of changing 3D observation parameters in real time in common virtual-reality systems. Experimental results show that the 3D virtual camera model and the roaming control method can be used effectively to observe virtual-reality environments and to adjust the parameters in real time according to observation demands. This shift from passive to interactive virtual 3D is of great significance for improving the sense of immersion in virtual reality.

  5. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, shows video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  6. The Development and Application of Advanced Video and Microcomputer-Based Command and Control (C2) Systems

    Science.gov (United States)

    1982-12-01

    video recorders; microcomputers; spatial data management; shared-data microcomputer software design; 6502 microprocessor; APPLE II; AUSTRAC-T… maintain a key role in supporting the continued development of the technology. In 1979, work began at the Computer Corporation of America to…

  7. Flow visualization by mobile phone cameras

    Science.gov (United States)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) sensors to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and in fluid-dynamics education at high schools and universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
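
    The core of such a simplistic PIV system is locating the peak of the cross-correlation between two interrogation windows. A minimal single-window sketch of that step (an assumed setup, not the authors' code):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Pixel displacement between two interrogation windows (FFT correlation)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.fft.rfft2(a).conj() * np.fft.rfft2(b), s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the window wrap around to negative displacements
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dx), int(dy)

rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))                         # random "particle" pattern
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))  # moved 5 px right, 3 px down
print(piv_displacement(frame_a, frame_b))              # -> (5, 3)
```

    Tiling the frames into many such windows and scaling each pixel displacement by the magnification and inter-frame time yields the velocity field.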

  8. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views do not overlap. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time-delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM), with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time-delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time-delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.
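
    The time-delayed dependency at the heart of the model can be illustrated with a simple lag estimator: the delay between activity time series in two disjoint views is where their normalized cross-correlation peaks. The data below are toy series; the TD-PGM itself is far richer than this sketch.

```python
import numpy as np

def estimate_delay(x, y, max_lag):
    """Lag (in samples) at which series y best follows series x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = list(range(-max_lag, max_lag + 1))
    score = [np.mean(x[max(0, -l):len(x) - max(0, l)] *
                     y[max(0, l):len(y) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(score))]

rng = np.random.default_rng(1)
entrance = rng.random(500)                               # activity level, view A
platform = np.concatenate([np.zeros(4), entrance[:-4]])  # same crowd, 4 samples later
platform = platform + 0.05 * rng.standard_normal(500)    # observation noise
print(estimate_delay(entrance, platform, max_lag=10))    # -> 4
```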

  9. TakeCARE, a Video Bystander Program to Help Prevent Sexual Violence on College Campuses: Results of Two Randomized, Controlled Trials.

    Science.gov (United States)

    Jouriles, Ernest N; McDonald, Renee; Rosenfield, David; Levy, Nicole; Sargent, Kelli; Caiozzo, Christina; Grych, John H

    2016-07-01

    The present research reports on two randomized controlled trials evaluating TakeCARE, a video bystander program designed to help prevent sexual violence on college campuses. In Study 1, students were recruited from psychology courses at two universities. In Study 2, first-year students were recruited from a required course at one university. In both studies, students were randomly assigned to view one of two videos: TakeCARE or a control video on study skills. Just before viewing the videos, students completed measures of bystander behavior toward friends and ratings of self-efficacy for performing such behaviors. The efficacy measure was administered again after the video, and both the bystander behavior measure and the efficacy measure were administered again either one (Study 1) or two (Study 2) months later. In both studies, students who viewed TakeCARE, compared to students who viewed the control video, reported engaging in more bystander behavior toward friends and greater feelings of efficacy for performing such behavior. In Study 1, feelings of efficacy mediated effects of TakeCARE on bystander behavior; this result did not emerge in Study 2. This research demonstrates that TakeCARE, a video bystander program, can positively influence bystander behavior toward friends. Given its potential to be easily distributed to an entire campus community, TakeCARE might be an effective addition to campus efforts to prevent sexual violence.

  10. High-performance data and video recorder with real-time lossless compression

    Science.gov (United States)

    Beckstead, Jeffrey A.; Aceto, Steven C.; Conerty, Michelle D.; Nordhauser, Steven

    1997-01-01

    Over the last decade, the video camera has become a common diagnostic tool for many scientific, industrial and medical applications. The amount of data collected by video capture systems can be enormous. For example, standard NTSC video requires 5 MBytes/sec, and many groups want higher resolution in bit depth, spatial resolution and/or frame rate. Despite great advances in video capture systems developed for the mass-media and teleconferencing markets, the smaller markets of scientific and industrial applications have been ignored. This is primarily due to their need to maintain the independent nature of each camera system and to preserve the high quality of the video data. Many commercial systems are capable of digitizing a single camera (B/W or color) or multiple synchronized B/W cameras using an RGB color video capture chip set. In addition, most manufacturers use lossy compression to reduce the bandwidth before storing the data to disk. To address the needs of the scientific community, a high-performance data and video recorder has been developed. This system uses field-programmable gate arrays (FPGAs) to control the analog and digital signals and to perform real-time lossless compression on the incoming data streams. Due to the flexibility inherent in the system, it can be configured for a variety of camera resolutions, frame rates and compression algorithms. In addition, alternative general-purpose data acquisition modules are also being incorporated into the design. The modular design of the video/data recorder allows the carrier components to be easily adapted to new bus technology as it becomes available, or the data acquisition components to be tailored to a specific application. Details of the recorder architecture are presented along with examples applied to thermonuclear fusion experiments. A lossless compression ratio of 3:1 has been obtained on fusion plasma images, with further reductions expected, allowing the…
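
    The flavour of predictive lossless compression such FPGAs implement can be sketched in a few lines: predict each pixel from its left neighbour and entropy-code the small residuals. Here zlib stands in for the hardware entropy coder and the smooth synthetic frame is illustrative; the 3:1 figure in the abstract refers to real fusion images.

```python
import zlib
import numpy as np

row = (128 + 60 * np.sin(np.linspace(0, 8 * np.pi, 256))).astype(np.uint8)
img = np.tile(row, (256, 1))               # smooth synthetic 256x256 frame

# Predict each pixel from its left neighbour; keep residuals modulo 256
residuals = np.diff(img.astype(np.int16), axis=1, prepend=0) % 256
packed = residuals.astype(np.uint8).tobytes()

# Losslessness check: cumulative sums of the residuals rebuild the frame exactly
decoded = np.cumsum(np.frombuffer(packed, np.uint8).reshape(img.shape)
                    .astype(np.int64), axis=1) % 256
assert np.array_equal(decoded, img)

ratio = img.nbytes / len(zlib.compress(packed, 9))
print(f"compression ratio {ratio:.1f}:1")
```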

  11. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming “the new black” in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well known, but the use of “academic video” for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo, as well as… approaches involving researchers and researcher participants. The general challenges of designing academic video include negotiating copyrights, gaining mastery of video editing, and considering online distribution (e.g. maintaining some control). A progression toward researchers designing video also begs… and distribute publications online through new digital media platforms, including blogs, open-access research databases etc. It involves a critical (re)examination of our authorial voice as researchers.

  12. The AAPM/RSNA physics tutorial for residents: fluoroscopy: optical coupling and the video system.

    Science.gov (United States)

    Van Lysel, M S

    2000-01-01

    In fluoroscopic/fluorographic systems, an image intensifier is optically coupled to recording cameras. The optical distributor is responsible for transmitting a focused image from the output phosphor of the image intensifier to the focal planes of the cameras. Each camera has an aperture, which is used to control the level of light reaching its focal plane. The aperture setting determines the patient x-ray exposure level and the image noise level. Increasing the x-ray exposure reduces image noise; reducing the x-ray exposure increases image noise. Fluoroscopic/fluorographic systems always include a video camera. The functions of the video system are to provide for multiple observers and to facilitate image recording. The camera head contains an image sensor, which converts the light image from the image intensifier into a voltage signal. The device used to generate the video signal is a pickup tube or a charge-coupled device sensor. The method used is raster scanning, of which there are two types: progressive and interlaced. The vertical resolution of the system is primarily determined by the number of scan lines; the horizontal resolution is primarily determined by the bandwidth. Frame rate reduction can be a powerful tool for exposure reduction.
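
    The statement that horizontal resolution is determined by bandwidth can be put into rough numbers. The sketch below uses nominal NTSC-like timing (rounded textbook values, not figures from the article):

```python
# Nominal NTSC-like numbers (rounded textbook values, not from the article)
active_line_us = 52.7      # visible part of each 63.6-us scan line
aspect = 4 / 3             # picture width / height
kell = 0.7                 # Kell factor for perceived vertical resolution

def bandwidth_mhz(tvl_per_height):
    """Bandwidth needed to resolve a given count of TV lines per picture height."""
    cycles_per_line = tvl_per_height * aspect / 2   # one cycle = a line pair
    return cycles_per_line / active_line_us         # MHz, since time is in us

vertical_tvl = kell * 485                  # ~340 TV lines from 485 active lines
print(round(bandwidth_mhz(340), 1))        # -> 4.3 (MHz), close to broadcast NTSC
```

    The same arithmetic shows why halving the frame rate (as in frame-rate reduction for dose saving) relaxes either the bandwidth or the resolution requirement.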

  13. Camera! Action! Collaborate with Digital Moviemaking

    Science.gov (United States)

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  14. Contact-free heart rate measurement using multiple video data

    Science.gov (United States)

    Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei

    2013-10-01

    In this paper, we propose a contact-free heart rate measurement method based on analyzing sequential images from multiple video data streams. In the proposed method, skin-like pixels are first detected in the multiple video streams to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is finally selected among the independent-component candidates to measure the heart rate, achieving under 2% deviation on average compared with a pulse oximeter in a controlled environment. The advantages of the proposed method are that: 1) it uses low-cost, widely accessible camera devices; 2) it eases user discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video data streams.
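
    Once a representative component has been selected, the heart rate is read off the dominant spectral peak in the cardiac band. A toy sketch of that last step, with a synthetic trace standing in for the selected component (the skin detection and ICA stages are omitted):

```python
import numpy as np

fps = 30.0
t = np.arange(0, 20, 1 / fps)              # 20 s of video at 30 fps
rng = np.random.default_rng(0)
# Synthetic color trace: 1.2 Hz pulse (72 bpm) buried in sensor noise
trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size)

def heart_rate_bpm(sig, fps, lo=0.7, hi=3.0):
    """Heart rate from the dominant spectral peak in the cardiac band."""
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    freqs = np.fft.rfftfreq(sig.size, 1 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(spec[band])]

print(heart_rate_bpm(trace, fps))          # -> ~72 bpm
```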

  15. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact, network-addressable, scientific-grade CCD camera for use in diagnostics ranging from streak cameras to gated X-ray imaging cameras. Because of the limited space inside the diagnostics, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic over a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and a power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electrons of read noise at a 1-MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and its performance characterization is reported.
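
    As a sanity check on the quoted figures, assuming the usual definition DR = 20·log10(full well / read noise), a 70-dB dynamic range with 14 electrons of read noise implies a full-well capacity of roughly 44,000 electrons:

```python
# Assumed relation: dynamic range (dB) = 20 * log10(full well / read noise)
read_noise_e = 14                # electrons rms, from the abstract
dyn_range_db = 70                # dB, from the abstract

full_well = read_noise_e * 10 ** (dyn_range_db / 20)
print(f"implied full-well capacity: {full_well:,.0f} electrons")   # ~44,000
```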

  16. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  17. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.;

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio-based speaker tracking, which is used for adaptive beam-forming and automatic camera…

  18. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one-stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and streaming video (video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats.

  19. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky, with the purpose of determining positions and distances using photometric-redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k pixels are needed; the pixels are square, of 15 μm size. The optical characteristics of the prime-focus corrector deliver a field of view in which eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arcminutes. The remaining CCDs will occupy the vignetted region, extending the field diameter to one degree; two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, hosting 16 filters at most, located inside the cryostat a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid-nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  20. Brain training with non-action video games enhances aspects of cognition in older adults: a randomized controlled trial.

    Science.gov (United States)

    Ballesteros, Soledad; Prieto, Antonio; Mayas, Julia; Toril, Pilar; Pita, Carmen; Ponce de León, Laura; Reales, José M; Waterworth, John

    2014-01-01

    Age-related cognitive and brain declines can result in functional deterioration in many cognitive domains, dependency, and dementia. A major goal of aging research is to investigate methods that help to maintain brain health, cognition, independent living and wellbeing in older adults. This randomized controlled study investigated the effects of 20 1-h non-action video game training sessions, with games selected from a commercially available package (Lumosity), on a series of age-declined cognitive functions and on subjective wellbeing. Two groups of healthy older adults participated in the study: the experimental group, who received the training, and the control group, who attended three meetings with the research team over the course of the study. Groups were similar at baseline on demographics, vocabulary, global cognition, and depression status. All participants were assessed individually before and after the intervention, or after a similar period of time, using neuropsychological tests and laboratory tasks to investigate possible transfer effects. The results showed significant improvements in the trained group, and no variation in the control group, in processing speed (choice reaction time), attention (reduction of distraction and increase of alertness), and immediate and delayed visual recognition memory, as well as a trend toward improvement in Affection and Assertivity, two dimensions of the Wellbeing Scale. Visuospatial working memory (WM) and executive control (shifting strategy) did not improve. Overall, the current results support the idea that training healthy older adults with non-action video games will enhance some cognitive abilities but not others.

  1. EDICAM fast video diagnostic installation on the COMPASS tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Szappanos, A., E-mail: szappanos@rmki.kfki.h [KFKI-RMKI, EURATOM Association, PO Box 49, Budapest-114, H-1521 Budapest (Hungary); Berta, M. [Szechenyi Istvan University, EURATOM Association, Egyetem ter 1, 9026 Gyor (Hungary); Hron, M.; Panek, R.; Stoeckel, J. [Institute of Plasma Physics AS CR, Association EURATOM/IPP.CR, Za Slovankou 3, 182 00 Prague (Czech Republic); Tulipan, S.; Veres, G. [KFKI-RMKI, EURATOM Association, PO Box 49, Budapest-114, H-1521 Budapest (Hungary); Weinzettl, V. [Institute of Plasma Physics AS CR, Association EURATOM/IPP.CR, Za Slovankou 3, 182 00 Prague (Czech Republic); Zoletnik, S. [KFKI-RMKI, EURATOM Association, PO Box 49, Budapest-114, H-1521 Budapest (Hungary)

    2010-07-15

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed by the Hungarian Association and was installed on the COMPASS tokamak at the Institute of Plasma Physics AS CR in Prague in February 2009. The standalone system contains a data acquisition PC and a prototype sensor module of EDICAM. An appropriate optical system has been designed and adjusted for the local requirements, and a mechanical holder keeps the camera out of the magnetic field. The fast camera contains a monochrome CMOS sensor with advanced control features and spectral sensitivity in the visible range. A special web-based control interface has been implemented using the Java Spring framework to provide the control features in a graphical user environment. The Java Native Interface (JNI) is used to reach the driver functions and to collect the data stored by direct memory access (DMA). Using a built-in real-time streaming server, one can see live video from the camera through any web browser on the intranet. The live video is distributed in Motion JPEG format using the real-time streaming protocol (RTSP), and a Java applet has been written to show the movie on the client side. The control system contains basic image-processing features, and the 3D wireframe of the tokamak can be projected onto selected frames. A MATLAB interface is also presented, with advanced post-processing and analysis features to make the raw data available to high-level computing programs. In this contribution, the concepts of the EDICAM control center and the functions of the distinct software modules are described.

  2. A computerized system for video analysis of the aortic valve.

    Science.gov (United States)

    Vesely, I; Menkis, A; Campbell, G

    1990-10-01

    A novel technique was developed to study the dynamic behavior of the porcine aortic valve in an isolated heart preparation. Under the control of a personal computer, a video frame-grabber board continuously acquired and digitized images of the aortic valve, while an analog-to-digital (A/D) converter read four channels of physiological data (flow rate, aortic and ventricular pressure, and aortic root diameter). The valve was illuminated with a strobe light synchronized to fire at the field-acquisition rate of the CCD video camera. Using the overlay bits of the video board, the measured parameters were superimposed over the live video as graphical tracings, and the resulting composite images were recorded on-line to videotape. Overlaying the valve images with the graphical tracings of acquired data enabled the data tracings to be precisely synchronized with the video images of the aortic valve. This technique enabled us to observe the relationship between aortic root expansion and valve function.

  3. Playing with the Camera - Creating with Each Other

    DEFF Research Database (Denmark)

    Vestergaard, Vitus

    2015-01-01

    …it is imperative to investigate how museum users in a group create videos and engage with each other and the exhibits. Based on research on young users creating videos in the Media Mixer, this article explores what happens during the creative process in front of a camera, drawing upon theories of museology and media…

  4. Camera Augmented Mobile C-arm

    Science.gov (United States)

    Wang, Lejing; Weidert, Simon; Traub, Joerg; Heining, Sandro Michael; Riquarts, Christian; Euler, Ekkehard; Navab, Nassir

    The Camera Augmented Mobile C-arm (CamC) system, which extends a regular mobile C-arm with a video camera, provides an X-ray and video image overlay. Thanks to the mirror construction and a one-time calibration of the device, the acquired X-ray images are co-registered with the video images without any calibration or registration during the intervention. It is very important to quantify and qualify the system before its introduction into the OR. In this communication, we extend the previously performed overlay accuracy analysis of the CamC system to another clinically important parameter: the radiation dose applied to the patient. Since the mirror of the CamC system absorbs and scatters radiation, we introduce a method for estimating the correct applied dose using an independent dose measurement device. The results show that the mirror absorbs and scatters 39% of the X-ray radiation.

  5. Real-time video-image analysis

    Science.gov (United States)

    Eskenazi, R.; Rayfield, M. J.; Yakimovsky, Y.

    1979-01-01

    A digitizer and storage system allows rapid random access to video data by computer. RAPID (random-access picture digitizer) uses two commercially available charge-injection solid-state TV cameras as sensors. It can continuously update its memory with each frame of the video signal, or it can hold a given frame in memory. In either mode, it generates a composite video output signal representing the digitized image in memory.
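
    The two operating modes described, continuous update versus holding a given frame, amount to a tiny state machine. The class and method names below are invented for illustration:

```python
class FrameStore:
    """Continuous-update vs hold-frame behaviour of a RAPID-like digitizer."""

    def __init__(self):
        self.frame = None
        self.hold = False

    def ingest(self, frame):     # called once per incoming video frame
        if not self.hold:
            self.frame = frame

    def freeze(self):            # hold the current frame in memory
        self.hold = True

    def resume(self):            # back to continuous updating
        self.hold = False

store = FrameStore()
store.ingest("frame-1")
store.freeze()
store.ingest("frame-2")          # ignored while holding
print(store.frame)               # -> frame-1
```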

  6. Effects of video-feedback on the communication, clinical competence and motivational interviewing skills of practice nurses: a pre-test posttest control group study.

    Science.gov (United States)

    Noordman, Janneke; van der Weijden, Trudy; van Dulmen, Sandra

    2014-10-01

    To examine the effects of individual video-feedback on the generic communication skills, clinical competence (i.e. adherence to practice guidelines) and motivational interviewing skills of experienced practice nurses working in primary care. Continuing professional education may be necessary to refresh and reflect on the communication and motivational interviewing skills of experienced primary care practice nurses; a video-feedback method was designed to improve these skills. Pre-test/post-test control group design. Seventeen Dutch practice nurses and 325 patients participated between June 2010 and June 2011. Nurse-patient consultations were videotaped at two moments (T0 and T1), with an interval of 3-6 months. The videotaped consultations were rated using two protocols: the Maastrichtse Anamnese en Advies Scorelijst met globale items (MAAS-global) and the Behaviour Change Counselling Index. Before the recordings, nurses were allocated to a control or a video-feedback group; nurses allocated to the video-feedback group received video-feedback between T0 and T1. Data were analysed using multilevel linear or logistic regression. Nurses who received video-feedback paid significantly more attention to the patient's request for help and to the physical examination, and gave significantly more understandable information. With respect to motivational interviewing, nurses who received video-feedback paid more attention to 'agenda setting and permission seeking' during their consultations. Video-feedback is a potentially effective method to improve practice nurses' generic communication skills. Although a single video-feedback session does not seem sufficient to improve all motivational interviewing skills, significant improvement was found in some specific skills. Nurses' clinical competences were not altered by the feedback, owing to already high baseline standards. © 2014 John Wiley & Sons Ltd.

  7. Integrating Scene Parallelism in Camera Auto-Calibration

    Institute of Scientific and Technical Information of China (English)

    LIU Yong (刘勇); WU ChengKe (吴成柯); Hung-Tat Tsui

    2003-01-01

    This paper presents an approach for camera auto-calibration from uncalibrated video sequences taken by a hand-held camera. The novelty of this approach lies in transforming line parallelism into constraints on the absolute quadric during camera auto-calibration. This makes some critical cases solvable and the reconstruction more nearly Euclidean. The approach is implemented and validated using simulated data and real image data; the experimental results show its effectiveness.
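
    The geometric fact the approach builds on is that the images of parallel 3D lines meet at a vanishing point, which is cheap to compute with homogeneous cross products. The image coordinates below are made up; the paper's actual constraints act on the absolute quadric, not shown here.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous image line through two pixel points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, back in pixel coordinates."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Image segments assumed to be projections of two parallel 3D lines
l1 = line_through((100.0, 400.0), (300.0, 300.0))
l2 = line_through((100.0, 200.0), (300.0, 250.0))
print(intersection(l1, l2))      # vanishing point, ~(366.7, 266.7)
```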

  8. Intelligent Model for Video Survillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Video surveillance systems sense and track threatening events in the real-time environment. They guard against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become key to addressing problems in public security. These systems are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications may also threaten a video surveillance application that is expected to be reliable. As a result, cybercrime, illegal video access and mishandling of videos may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  9. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator), which provides real-time video acquisition, high compression using advanced wavelet algorithms, and storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixel at 60 fps or high-frame-rate video images up to about 1000 fps at 512x512 pixels.
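
    A back-of-the-envelope calculation shows why heavy compression is essential for the camera modes quoted above. The downlink allocation used here is a hypothetical figure, not from the article:

```python
# Assumed downlink allocation; the two camera modes are from the abstract
hi_res_bps = 4e6 * 60 * 8            # 4 Mpixel x 60 fps x 8 bit  -> bits/s
hi_speed_bps = 512 * 512 * 1000 * 8  # 512x512 x 1000 fps x 8 bit -> bits/s
downlink_bps = 20e6                  # hypothetical 20 Mbit/s to ground

for name, bps in [("high-res", hi_res_bps), ("high-speed", hi_speed_bps)]:
    print(f"{name}: {bps / 1e6:.0f} Mbit/s raw, "
          f"needs {bps / downlink_bps:.0f}:1 compression")
```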

  10. Rice Crop Field Monitoring System with Radio Controlled Helicopter Based Near Infrared Cameras Through Nitrogen Content Estimation and Its Distribution Monitoring

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-03-01

    A rice crop field monitoring system using radio-controlled-helicopter-based near-infrared cameras is proposed, together with a nitrogen content estimation method for monitoring its distribution in the field of concern. Through experiments at the Saga Prefectural Agricultural Research Institute (SPARI), it is found that the proposed system works well for monitoring the nitrogen content of the rice crop, which indicates crop quality, and its distribution in the field. It therefore becomes possible to maintain rice crop fields in terms of quality control.
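
    Nitrogen estimation from near-infrared imagery typically goes through a vegetation index. As an illustration (the paper's exact regression is not given here), NDVI computed from the NIR and red bands of the camera:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red band images."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids divide-by-zero

# Toy 2x2 band images standing in for helicopter camera frames
nir = np.array([[200, 180], [90, 60]], dtype=np.uint8)
red = np.array([[50, 60], [70, 55]], dtype=np.uint8)
print(ndvi(nir, red).round(2))   # dense canopy scores high, bare soil low
```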

  11. Plasticity of attentional functions in older adults after non-action video game training: a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Julia Mayas

    Full Text Available A major goal of recent research in aging has been to examine cognitive plasticity in older adults and its capacity to counteract cognitive decline. The aim of the present study was to investigate whether older adults could benefit from brain training with video games in a cross-modal oddball task designed to assess distraction and alertness. Twenty-seven healthy older adults participated in the study (15 in the experimental group, 12 in the control group). The experimental group received 20 1-hr video game training sessions using a commercially available brain-training package (Lumosity) involving problem solving, mental calculation, working memory and attention tasks. The control group did not practice this package and, instead, attended meetings with the other members of the study several times over the course of the study. Both groups were evaluated before and after the intervention using a cross-modal oddball task measuring alertness and distraction. The results showed a significant reduction of distraction and an increase of alertness in the experimental group and no variation in the control group. These results suggest neurocognitive plasticity in the old human brain, as training enhanced cognitive performance on attentional functions. Trial registration: ClinicalTrials.gov NCT02007616.

  12. Monocular camera-based mobile robot visual servo regulation control

    Institute of Scientific and Technical Information of China (English)

    刘阳; 王忠立; 蔡伯根; 闻映红

    2016-01-01

    To solve the monocular camera-based mobile robot regulation problem, a kinematic model of the robot in the camera coordinate frame was established under the conditions of unknown range information, an unknown translation between the robot and camera frames, and a camera mounted with a fixed dip angle. Based on this model, a robust adaptive controller built on the decomposition of the planar homography matrix was proposed, which guarantees global exponential convergence of the error. Simulation and experimental results show that the controller drives the robot rapidly and smoothly to the desired pose and is robust to parameter uncertainty.

  13. Combustion pinhole camera system

    Science.gov (United States)

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  14. Video essay

    DEFF Research Database (Denmark)

    2015-01-01

    Camera movement has a profound influence on the way films look and the way films are experienced by spectators. In this visual essay Jakob Isak Nielsen proposes six major functions of camera movement in narrative cinema. Individual camera movements may serve more than one of these functions at the same time...

  15. Spotlight on Authentic Learning: Student Developed Digital Video Projects

    Science.gov (United States)

    Kearney, Matthew; Schuck, Sandy

    2006-01-01

    The recent convergence of video and computer technologies presents new opportunities and challenges in education. Video production resources such as cameras and video editing software are now widely available in many schools and homes. The ease of use of these resources has encouraged teachers to use them across the curriculum with students of all…

  16. An open-source, FireWire camera-based, Labview-controlled image acquisition system for automated, dynamic pupillometry and blink detection.

    Science.gov (United States)

    de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2013-12-01

    The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on the practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable video recording modes: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in LabVIEW. An offline analysis suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs.
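
    The record does not detail the pupil-estimation algorithms themselves. As a minimal, hypothetical sketch of the simplest family of approaches (threshold the dark pupil blob and measure its extent in pixels), consider:

```python
def pupil_diameter(image, threshold):
    """Estimate pupil diameter (in pixels) from a grayscale frame.

    Minimal sketch: the pupil is assumed to be the darkest blob, so
    we threshold and take the widest dark run along each axis.
    image is a list of rows of intensity values; a real pipeline
    would add blink rejection, ellipse fitting, and a pixel-to-mm
    scale calibration.
    """
    dark = [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v < threshold]
    if not dark:
        return 0
    rows = [r for r, _ in dark]
    cols = [c for _, c in dark]
    width = max(cols) - min(cols) + 1
    height = max(rows) - min(rows) + 1
    return (width + height) / 2  # average horizontal/vertical extent

# Synthetic 6x6 frame: bright background (200) with a 3-pixel dark pupil.
frame = [[200] * 6 for _ in range(6)]
for r in range(2, 5):
    for c in range(1, 4):
        frame[r][c] = 20
print(pupil_diameter(frame, threshold=100))  # 3.0
```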

  17. Solid State Replacement of Rotating Mirror Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the "In-situ Storage Image Sensor" or "ISIS" by Prof. Goji Etoh, has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  18. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents.

    Directory of Open Access Journals (Sweden)

    Hanneke Scholten

    Full Text Available Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138 was conducted to test the effectiveness of a biofeedback video game (Dojo for adolescents with elevated levels of anxiety. Adolescents (11-15 years old were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape. Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the "at-risk" cut-off on the Spence Children Anxiety Survey were eligible. Adolescents' anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents' anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants' expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues.

  19. A Randomized Controlled Trial to Test the Effectiveness of an Immersive 3D Video Game for Anxiety Prevention among Adolescents

    Science.gov (United States)

    Scholten, Hanneke; Malmberg, Monique; Lobel, Adam; Engels, Rutger C. M. E.; Granic, Isabela

    2016-01-01

    Adolescent anxiety is debilitating, the most frequently diagnosed adolescent mental health problem, and leads to substantial long-term problems. A randomized controlled trial (n = 138) was conducted to test the effectiveness of a biofeedback video game (Dojo) for adolescents with elevated levels of anxiety. Adolescents (11–15 years old) were randomly assigned to play Dojo or a control game (Rayman 2: The Great Escape). Initial screening for anxiety was done on 1,347 adolescents in five high schools; only adolescents who scored above the “at-risk” cut-off on the Spence Children Anxiety Survey were eligible. Adolescents’ anxiety levels were assessed at pre-test, post-test, and at three month follow-up to examine the extent to which playing Dojo decreased adolescents’ anxiety. The present study revealed equal improvements in anxiety symptoms in both conditions at follow-up and no differences between Dojo and the closely matched control game condition. Latent growth curve models did reveal a steeper decrease of personalized anxiety symptoms (not of total anxiety symptoms) in the Dojo condition compared to the control condition. Moderation analyses did not show any differences in outcomes between boys and girls nor did age differentiate outcomes. The present results are of importance for prevention science, as this was the first full-scale randomized controlled trial testing indicated prevention effects of a video game aimed at reducing anxiety. Future research should carefully consider the choice of control condition and outcome measurements, address the potentially high impact of participants’ expectations, and take critical design issues into consideration, such as individual- versus group-based intervention and contamination issues. PMID:26816292

  1. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze content of massive amount of public space video data and has been one of the most active areas of computer vision research in the last two decades. Current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  2. Low-Complexity Error-Control Methods for Scalable Video Streaming

    Institute of Scientific and Technical Information of China (English)

    Zhijie Zhao; Jörn Ostermann

    2012-01-01

    In this paper, low-complexity error-resilience and error-concealment methods for the scalable video coding (SVC) extension of H.264/AVC are described. At the encoder, multiple-description coding (MDC) is used as error-resilient coding. Balanced scalable multiple descriptions are generated by mixing the pre-encoded scalable bit streams. Each description is wholly decodable using a standard SVC decoder. A preprocessor can be placed before an SVC decoder to extract the packets from the highest-quality bit stream. At the decoder, error concealment involves using a lightweight decoder preprocessor to generate a valid bit stream from the available network abstraction layer (NAL) units when medium-grain scalability (MGS) layers are used. Modifications are made to the NAL unit header or slice header if some NAL units of MGS layers are lost. The number of additional packets that a decoder discards as a result of a packet loss is minimized. The proposed error-resilience and error-concealment methods require little computation, which makes them suitable for real-time video streaming. Experimental results show that the proposed methods significantly reduce quality degradation caused by packet loss.
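
    The dependency rule the preprocessor enforces can be caricatured in a few lines. The sketch below is a schematic of discarding MGS quality layers that depend on a lost lower layer, not the real SVC NAL unit syntax; the `conceal` function and tuple packet format are invented for illustration.

```python
def conceal(packets, lost):
    """Schematic decoder preprocessor for MGS packet loss.

    packets: list of (frame, quality_layer) tuples in bitstream order;
    lost: set of tuples that never arrived.  If a quality layer of a
    frame is lost, every higher MGS layer of that frame is discarded
    too, so the remaining units still form a decodable stream.
    """
    broken = {}  # frame -> lowest lost quality layer
    for frame, layer in lost:
        broken[frame] = min(layer, broken.get(frame, layer))
    kept = []
    for frame, layer in packets:
        if (frame, layer) in lost:
            continue
        if frame in broken and layer > broken[frame]:
            continue  # depends on a lost lower MGS layer
        kept.append((frame, layer))
    return kept

stream = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
print(conceal(stream, lost={(1, 1)}))  # [(0, 0), (0, 1), (0, 2), (1, 0)]
```

    The real method additionally rewrites NAL unit and slice headers so a standard SVC decoder accepts the pruned stream unchanged.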

  3. Streaming weekly soap opera video episodes to smartphones in a randomized controlled trial to reduce HIV risk in young urban African American/black women.

    Science.gov (United States)

    Jones, Rachel; Lacroix, Lorraine J

    2012-07-01

    Love, Sex, and Choices is a 12-episode soap opera video series created as an intervention to reduce HIV sex risk. The effect on women's HIV risk behavior was evaluated in a randomized controlled trial in 238 high-risk, predominantly African American young adult women in the urban Northeast. To facilitate on-demand access and privacy, the episodes were streamed to study-provided smartphones. Here, we discuss the development of a mobile platform to deliver the 12 weekly video episodes or weekly HIV risk reduction written messages to smartphones, including the technical requirements, development, and evaluation. The popularity of the smartphone and use of the Internet for multimedia offer a new channel to address health disparities in traditionally underserved populations. This is the first study to report on streaming a serialized video-based intervention to a smartphone. The approach described here may provide useful insights in assessing advantages and disadvantages of smartphones to implement a video-based intervention.

  4. Repurposing video recordings for structure motion estimations

    Science.gov (United States)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
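
    As a hedged sketch of the final analysis step described above (the paper's pipeline applies optical flow to perspective-corrected frames, which is not reproduced here), once a per-frame displacement trace has been extracted, its frequency content can be estimated with a plain discrete Fourier transform:

```python
import cmath
import math

def dominant_frequency(displacement, fps):
    """Dominant frequency (Hz) of a structure's displacement trace.

    displacement: per-frame motion values extracted from video;
    fps: video frame rate.  Uses a plain O(n^2) DFT to stay
    dependency-free; real tooling would use an FFT.
    """
    n = len(displacement)
    mean = sum(displacement) / n
    xs = [x - mean for x in displacement]  # remove DC offset
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(xs))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fps / n

# 2 Hz sway sampled at 30 frames per second for 3 seconds.
trace = [math.sin(2 * math.pi * 2 * t / 30) for t in range(90)]
print(dominant_frequency(trace, fps=30))  # 2.0
```

    This matches the abstract's observation that the frequency content of the response is the most recoverable quantity, since it survives calibration and scale errors in the per-pixel motion estimates.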

  5. The Impact of Short-Term Video Games on Performance among Children with Developmental Delays: A Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Ru-Lan Hsieh

    Full Text Available This prospective, randomized controlled study investigated the effects of short-term interactive video game play among children with developmental delays participating in traditional rehabilitation treatment at a rehabilitation clinic. One hundred and one boys and 46 girls with a mean age of 5.8 years (range: 3 to 12 years) were enrolled in this study. All patients were confirmed to suffer from developmental delays and were participating in traditional rehabilitation treatment. Children participated in two periods of 4 weeks each, with group A being offered an intervention of eight 30-minute sessions of interactive video games in the first period and group B in the second, in addition to the traditional rehabilitation treatment. The physical, psychosocial, and total health of the children was periodically assessed using the parent-reported Pediatric Quality of Life Inventory-Generic Core Scales (PedsQL); the children's upper extremity and physical function, transfer and basic mobility, sports and physical functioning, and global functioning were assessed using the Pediatric Outcomes Data Collection Instrument. Parental impact was evaluated using the PedsQL-Family Impact Module for family function, the PedsQL-Health Satisfaction questionnaire for parents' satisfaction with their children's care, and the World Health Organization-Quality of Life-Brief Version for quality of life. Compared with the baseline, significant improvements in physical function were observed in both groups (5.6 ± 19.5, p = 0.013; 4.7 ± 13.8, p = 0.009) during the intervention periods. No significant improvement in psychosocial health, functional performance, or family impact was observed in children with developmental delays. Short-term interactive video game play in conjunction with traditional rehabilitation treatment improved the physical health of children with developmental delays. Trial registration: ClinicalTrials.gov NCT02184715.

  6. Effects of Video Game Training on Behavioral and Electrophysiological Measures of Attention and Memory: Protocol for a Randomized Controlled Trial.

    Science.gov (United States)

    Ballesteros, Soledad; Mayas, Julia; Ruiz-Marquez, Eloisa; Prieto, Antonio; Toril, Pilar; Ponce de Leon, Laura; de Ceballos, Maria L; Reales Avilés, José Manuel

    2017-01-24

    Neuroplasticity-based approaches seem to offer promising ways of maintaining cognitive health in older adults and postponing the onset of cognitive decline symptoms. Although previous research suggests that training can produce transfer effects, this study was designed to overcome some limitations of previous studies by incorporating an active control group and the assessment of training expectations. The main objectives of this study are (1) to evaluate the effects of a randomized computer-based intervention consisting of training older adults with nonaction video games on brain and cognitive functions that decline with age, including attention and spatial working memory, using behavioral measures and electrophysiological recordings (event-related potentials [ERPs]) just after training and after a 6-month no-contact period; (2) to explore whether motivation, engagement, or expectations might account for possible training-related improvements; and (3) to examine whether inflammatory mechanisms assessed with noninvasive measurement of C-reactive protein in saliva impair cognitive training-induced effects. A better understanding of these mechanisms could elucidate pathways that could be targeted in the future by either behavioral or neuropsychological interventions. A single-blinded randomized controlled trial with an experimental group and an active control group, pretest, posttest, and 6-month follow-up repeated measures design is used in this study. A total of 75 cognitively healthy older adults were randomly distributed into experimental and active control groups. Participants in the experimental group received 16 1-hour training sessions with cognitive nonaction video games selected from Lumosity, a commercial brain training package. The active control group received the same number of training sessions with The Sims and SimCity, a simulation strategy game. We have recruited participants, have conducted the training protocol and pretest assessments, and are

  7. Effects of Video Game Training on Behavioral and Electrophysiological Measures of Attention and Memory: Protocol for a Randomized Controlled Trial

    Science.gov (United States)

    Mayas, Julia; Ruiz-Marquez, Eloisa; Prieto, Antonio; Toril, Pilar; Ponce de Leon, Laura; de Ceballos, Maria L; Reales Avilés, José Manuel

    2017-01-01

    Background Neuroplasticity-based approaches seem to offer promising ways of maintaining cognitive health in older adults and postponing the onset of cognitive decline symptoms. Although previous research suggests that training can produce transfer effects, this study was designed to overcome some limitations of previous studies by incorporating an active control group and the assessment of training expectations. Objective The main objectives of this study are (1) to evaluate the effects of a randomized computer-based intervention consisting of training older adults with nonaction video games on brain and cognitive functions that decline with age, including attention and spatial working memory, using behavioral measures and electrophysiological recordings (event-related potentials [ERPs]) just after training and after a 6-month no-contact period; (2) to explore whether motivation, engagement, or expectations might account for possible training-related improvements; and (3) to examine whether inflammatory mechanisms assessed with noninvasive measurement of C-reactive protein in saliva impair cognitive training-induced effects. A better understanding of these mechanisms could elucidate pathways that could be targeted in the future by either behavioral or neuropsychological interventions. Methods A single-blinded randomized controlled trial with an experimental group and an active control group, pretest, posttest, and 6-month follow-up repeated measures design is used in this study. A total of 75 cognitively healthy older adults were randomly distributed into experimental and active control groups. Participants in the experimental group received 16 1-hour training sessions with cognitive nonaction video games selected from Lumosity, a commercial brain training package. The active control group received the same number of training sessions with The Sims and SimCity, a simulation strategy game. 
Results We have recruited participants, have conducted the training protocol

  8. Artificial Video for Video Analysis

    Science.gov (United States)

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  9. The transportation network rough description for an adaptive traffic control algorithms by means of video detection technique

    Directory of Open Access Journals (Sweden)

    Jan PIECHA

    2013-01-01

    Full Text Available The contribution discusses a rough description of the transportation network that supports a satisfactory implementation of adaptive traffic control algorithms [4], supported by a video detection system. The decision-making algorithms must provide not only a prediction of vehicles' approach times at the intersections but also possible solutions for avoiding critical queues at the intersections. The majority of traditional traffic control systems are based on the number of cars recorded by inductive loops; however, such counts do not define the actual occupation state of any traffic lane. A green-light time window sized for passing a defined number of cars ignores the distance gaps between the cars in the lane, which is why a remarkable portion of the counted cars will not cross the intersection within the allotted green time. Procedures that search for an optimal solution using inductive measurements alone can therefore, in the majority of cases, be regarded as theoretical analysis only.
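
    The abstract's central point, that loop-based counts ignore inter-vehicle gaps that video detection can see, can be illustrated with a toy calculation. The function name, positions, and rates below are invented for illustration only.

```python
def vehicles_clearing(positions, speed, green_time):
    """How many queued vehicles cross the stop line during one green.

    positions: distance of each vehicle's front bumper from the stop
    line in metres (video detection can measure these gaps; an
    inductive loop only reports the count).  speed: average discharge
    speed in m/s.  Vehicles farther away than speed * green_time do
    not clear, regardless of how many were counted.
    """
    reach = speed * green_time
    return sum(1 for p in positions if p <= reach)

# Five counted vehicles, but gaps push two beyond the green window.
queue = [0, 7, 18, 40, 65]  # metres to stop line
print(vehicles_clearing(queue, speed=5, green_time=6))  # 3
```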

  10. Brain training with non-action video games enhances aspects of cognition in older adults: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Soledad eBallesteros

    2014-10-01

    Full Text Available Age-related cognitive and brain declines can result in functional deterioration in many cognitive domains, dependency, and dementia. A major goal of aging research is to investigate methods that help to maintain brain health, cognition, independent living and wellbeing in older adults. This randomized controlled study investigated the effects of 20 1-hr non-action video game training sessions, with games selected from a commercially available package (Lumosity), on a series of age-declined cognitive functions and subjective wellbeing. Two groups of healthy older adults participated in the study: the experimental group, who received the training, and the control group, who attended three meetings with the research team over the course of the study. Groups were similar at baseline on demographics, vocabulary, global cognition, and depression status. All participants were assessed individually before and after the intervention, or a similar period of time, using neuropsychological tests and laboratory tasks to investigate possible transfer effects. The results showed significant improvements in the trained group, and no variation in the control group, in processing speed (choice reaction time), attention (reduction of distraction and increase of alertness), and immediate and delayed visual recognition memory, as well as a trend toward improvement in Affection and Assertivity, two dimensions of the Wellbeing Scale. Visuospatial working memory (WM) and executive control (shifting strategy) did not improve. Overall, the current results support the idea that training healthy older adults with non-action video games will enhance some cognitive abilities but not others. Trial Registration: ClinicalTrials.gov identifier NCT02007616 (http://clinicaltrials.gov/show/NCT02007616).

  11. Contributions of Head-Mounted Cameras to Studying the Visual Environments of Infants and Young Children

    Science.gov (United States)

    Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.

    2015-01-01

    Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…

  12. Infrared Camera Characterization of Bi-Propellant Reaction Control Engines during Auxiliary Propulsion Systems Tests at NASA's White Sands Test Facility in Las Cruces, New Mexico

    Science.gov (United States)

    Holleman, Elizabeth; Sharp, David; Sheller, Richard; Styron, Jason

    2007-01-01

    This paper describes the application of a FLIR Systems A40M infrared (IR) digital camera for thermal monitoring of a Liquid Oxygen (LOX) and Ethanol bi-propellant Reaction Control Engine (RCE) during Auxiliary Propulsion System (APS) testing at the National Aeronautics & Space Administration's (NASA) White Sands Test Facility (WSTF) near Las Cruces, New Mexico. Typically, NASA has relied mostly on the use of thermocouples (TCs) for this type of thermal monitoring due to the variability of constraints required to accurately map rapidly changing temperatures from ambient to glowing hot chamber material. Obtaining accurate real-time temperatures in the IR spectrum is made even more elusive by the changing emissivity of the chamber material as it begins to glow. The parameters evaluated prior to APS testing included: (1) remote operation of the A40M camera using fiber-optic FireWire signal sender and receiver units; (2) operation of the camera inside a Pelco explosion-proof enclosure with a germanium window; (3) remote analog signal display for real-time monitoring; (4) remote digital data acquisition of the A40M's sensor information using FLIR's ThermaCAM Researcher Pro 2.8 software; and (5) overall reliability of the system. An initial characterization report was prepared after the A40M characterization tests at Marshall Space Flight Center (MSFC) to document controlled heat source comparisons to calibrated TCs. Summary IR digital data recorded from WSTF's APS testing is included within this document along with findings, lessons learned, and recommendations for further usage as a monitoring tool for the development of rocket engines.
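
    The emissivity problem mentioned above can be made concrete with a first-order grey-body correction. The formula below is a textbook Stefan-Boltzmann sketch assuming a grey body, total-radiation detection, and no reflected background radiation; it is not FLIR's actual calibration model.

```python
def emissivity_corrected_temp(apparent_kelvin, emissivity):
    """First-order emissivity correction for a radiometric reading.

    The camera reports the temperature a blackbody would need to
    emit the measured flux.  For a grey body of emissivity e,
    flux = e * sigma * T_true**4 = sigma * T_apparent**4, so
    T_true = T_apparent / e**0.25.  As the chamber glows and its
    emissivity drifts, the same apparent reading maps to very
    different true temperatures.
    """
    return apparent_kelvin / emissivity ** 0.25

print(round(emissivity_corrected_temp(1000.0, 0.9), 1))  # 1026.7
print(round(emissivity_corrected_temp(1000.0, 0.5), 1))  # 1189.2
```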

  13. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    Science.gov (United States)

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  14. vid113_0401r -- Video groundtruthing collected from RV Tatoosh during August 2005.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. Video stroke assessment (VSA) project: design and production of a prototype system for the remote diagnosis of stroke

    Science.gov (United States)

    Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.

    2005-04-01

    Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the opportune administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependent on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially-autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.
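As a hypothetical sketch of how a semi-automated exam might aggregate per-category results into an NIHSS total (the abstract does not describe the VSA's actual scoring logic), the item names and maxima below follow a standard NIH Stroke Scale subset:

```python
# Hypothetical aggregation of per-item NIHSS scores into a total. The item
# subset and validation logic are illustrative, not the VSA implementation.

NIHSS_MAX = {  # per-item maximum scores on the NIH Stroke Scale (subset)
    "loc": 3, "gaze": 2, "visual_fields": 3,
    "facial_palsy": 3, "motor_arm": 4, "motor_leg": 4,
}

def total_score(item_scores):
    """Validate each item against its scale maximum, then sum."""
    for item, score in item_scores.items():
        if item not in NIHSS_MAX:
            raise KeyError(f"unknown NIHSS item: {item}")
        if not 0 <= score <= NIHSS_MAX[item]:
            raise ValueError(f"{item} score out of range")
    return sum(item_scores.values())

exam = {"loc": 1, "gaze": 0, "visual_fields": 2, "motor_arm": 3}
print(total_score(exam))  # 6
```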

  16. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
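Tracking in the paper uses the well-known CamShift algorithm. Its core, a mean-shift search, can be sketched in pure NumPy as follows; this is a simplified stand-in for OpenCV's `cv2.CamShift`, shifting a window to the weighted centroid of a back-projection map until it converges:

```python
import numpy as np

def mean_shift(weights, window, iters=10):
    """CamShift-style search: shift a window toward the weighted centroid
    of `weights` (e.g. a histogram back-projection) until convergence.
    window = (row, col, height, width)."""
    r, c, h, w = window
    for _ in range(iters):
        patch = weights[r:r + h, c:c + w]
        if patch.sum() == 0:
            break
        rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        dr = (rows * patch).sum() / patch.sum() - (patch.shape[0] - 1) / 2
        dc = (cols * patch).sum() / patch.sum() - (patch.shape[1] - 1) / 2
        nr = max(0, min(int(round(r + dr)), weights.shape[0] - h))
        nc = max(0, min(int(round(c + dc)), weights.shape[1] - w))
        if (nr, nc) == (r, c):
            break
        r, c = nr, nc
    return r, c, h, w

# Synthetic "back-projection": a bright blob centered near (30, 40)
bp = np.zeros((64, 64))
bp[27:34, 37:44] = 1.0
print(mean_shift(bp, (20, 30, 10, 10)))
```

In the paper's handover scheme, the state of such a tracker (the window and target histogram) is what the mobile agent carries when it migrates to the adjacent camera.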

  17. High Speed Video Insertion

    Science.gov (United States)

    Janess, Don C.

    1984-11-01

    This paper describes a means of inserting alphanumeric characters and graphics into a high speed video signal and locking that signal to an IRIG B time code. A model V-91 IRIG processor, developed by Instrumentation Technology Systems under contract to Instrumentation Marketing Corporation has been designed to operate in conjunction with the NAC model FHS-200 High Speed Video Camera which operates at 200 fields per second. The system provides for synchronizing the vertical and horizontal drive signals such that the vertical sync precisely coincides with five millisecond transitions in the IRIG time code. Additionally, the unit allows for the insertion of an IRIG time message as well as other data and symbols.

  18. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  19. Virtual camera synthesis for soccer game replays

    Directory of Open Access Journals (Sweden)

    S. Sagas

    2013-07-01

    Full Text Available In this paper, we present a set of tools developed during the creation of a platform that allows the automatic generation of virtual views in a live soccer game production. Observing the scene through a multi-camera system, a 3D approximation of the players is computed and used for the synthesis of virtual views. The system is suitable both for static scenes, to create bullet time effects, and for video applications, where the virtual camera moves as the game plays.

  20. Tunneling Horizontal IEC 61850 Traffic through Audio Video Bridging Streams for Flexible Microgrid Control and Protection

    Directory of Open Access Journals (Sweden)

    Michael Short

    2016-03-01

    Full Text Available In this paper, it is argued that some low-level aspects of the usual IEC 61850 mapping to Ethernet are not well suited to microgrids due to their dynamic nature and geographical distribution as compared to substations. It is proposed that the integration of IEEE time-sensitive networking (TSN) concepts (which are currently implemented as audio video bridging (AVB) technologies) within an IEC 61850 / Manufacturing Message Specification framework provides a flexible and reconfigurable platform capable of overcoming such issues. A prototype test platform and bump-in-the-wire device for tunneling horizontal traffic through AVB are described. Experimental results are presented for sending IEC 61850 GOOSE (generic object oriented substation event) and SV (sampled values) messages through AVB tunnels. The obtained results verify that IEC 61850 event and sampled data may be reliably transported within the proposed framework with very low latency, even over a congested network. It is argued that since AVB streams can be flexibly configured from one or more central locations, and bandwidth reserved for their data ensuring predictability of delivery, this gives a solution which seems significantly more reliable than a pure MMS-based solution.
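The tunneling idea can be sketched as encapsulating a raw GOOSE Ethernet frame inside an AVB (IEEE 1722 AVTP) stream frame. The header layout below is an illustrative assumption, not the paper's actual bump-in-the-wire format; only the EtherType constants (0x88B8 for GOOSE, 0x22F0 for AVTP) and the GOOSE multicast MAC prefix are standard:

```python
import struct

GOOSE_ETHERTYPE = 0x88B8  # IEC 61850-8-1 GOOSE
AVTP_ETHERTYPE = 0x22F0   # IEEE 1722 AVTP, used by AVB streams

def tunnel_frame(stream_id, goose_frame, dst_mac, src_mac):
    """Wrap a raw GOOSE Ethernet frame as the payload of a (simplified)
    AVTP stream frame. Header fields are illustrative assumptions."""
    # subtype (0x7F: experimental, an assumption), flags, payload length,
    # 64-bit AVB stream ID
    header = struct.pack("!BBHQ", 0x7F, 0, len(goose_frame), stream_id)
    return (dst_mac + src_mac
            + struct.pack("!H", AVTP_ETHERTYPE) + header + goose_frame)

# A raw GOOSE frame starts with the standard 01-0C-CD-01-xx-xx multicast MAC
goose = (b"\x01\x0c\xcd\x01\x00\x01" + b"\x00" * 6
         + struct.pack("!H", GOOSE_ETHERTYPE) + b"goose-pdu")
frame = tunnel_frame(0xAABBCCDD, goose,
                     b"\x91\xe0\xf0\x00\x00\x01", b"\x02" * 6)
print(len(frame))
```

Because the stream ID is carried in the tunnel header, a central configurator can re-route or re-reserve the stream without touching the encapsulated GOOSE payload.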

  1. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    Science.gov (United States)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROV), Autonomous Underwater Vehicles (AUV), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed world-wide scientific community to collaboratively annotate videos anywhere at any time. Several features are fully implemented, among them:
    • User login system for fine-grained permission and access control
    • Video watching
    • Video search using keywords, geographic position, depth and time range, and any combination thereof
    • Video annotation organised in themes (tracks) such as biology and geology, among others, in standard or full-screen mode
    • Annotation keyword management: administrative users can add, delete, and update single annotation keywords, or upload sets of keywords from Excel sheets
    • Download of products for scientific use
    This unique web application system helps make costly ROV videos available online (estimated costs range between 5.000 and 10.000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to otherwise uncharted
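The combined search described above (keyword, depth, and time range in any combination) can be sketched with a minimal annotation model. Field names and the in-memory filtering are assumptions for illustration, not V-App's actual schema:

```python
# Minimal sketch of an annotation record and combined search of the kind a
# platform like V-App exposes. Schema and filtering are illustrative only.
from dataclasses import dataclass

@dataclass
class Annotation:
    video_id: str
    keyword: str
    track: str        # annotation theme, e.g. "biology" or "geology"
    depth_m: float
    timestamp_s: float

def search(annotations, keyword=None, depth=None, time_range=None):
    """Apply any combination of filters; None means 'no constraint'."""
    hits = annotations
    if keyword is not None:
        hits = [a for a in hits if a.keyword == keyword]
    if depth is not None:
        lo, hi = depth
        hits = [a for a in hits if lo <= a.depth_m <= hi]
    if time_range is not None:
        lo, hi = time_range
        hits = [a for a in hits if lo <= a.timestamp_s <= hi]
    return hits

db = [
    Annotation("dive042", "tubeworm", "biology", 2450.0, 812.0),
    Annotation("dive042", "basalt", "geology", 2451.5, 903.0),
    Annotation("dive043", "tubeworm", "biology", 1210.0, 64.0),
]
print(len(search(db, keyword="tubeworm", depth=(2000, 3000))))  # 1
```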

  2. Interconnected network of cameras

    Science.gov (United States)

    Hosseini Kamal, Mahdad; Afshari, Hossein; Leblebici, Yusuf; Schmid, Alexandre; Vandergheynst, Pierre

    2013-02-01

    Real-time development of multi-camera systems is a great challenge; camera synchronization and large data rates add to the complexity, which also grows with the number of cameras in the system. The customary implementation approach is centralized: all raw streams from the cameras are first stored and then processed for the target application. An alternative approach is to embed smart cameras in these systems instead of ordinary cameras with limited or no processing capability. Smart cameras with intra- and inter-camera processing capability, programmable at the software and hardware level, offer the right platform for distributed and parallel processing in real-time multi-camera applications. Inter-camera processing requires interconnecting the smart cameras in a network arrangement. A novel hardware emulation platform is introduced to demonstrate the concept of an interconnected network of cameras, and a methodology for constructing and analyzing the interconnection network is presented. A sample application is developed and demonstrated.

  3. Registration of Sub-Sequence and Multi-Camera Reconstructions for Camera Motion Estimation

    Directory of Open Access Journals (Sweden)

    Michael Wand

    2010-08-01

    Full Text Available This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state of the art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario multiple moving cameras, which capture the same scene simultaneously, are registered. A second new scenario occurs in situations where the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario image sequences of the same scene, which are captured under different illuminations, are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.
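Once unconnected feature point tracks are matched, registering two reconstructions into a common global coordinate system reduces to estimating a similarity transform from corresponding 3D points. A least-squares (Umeyama-style) sketch, assuming such point correspondences are available:

```python
import numpy as np

def align_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation
    t) with dst ~= s * R @ src + t, from matched 3D points (Umeyama-style).
    src, dst: (N, 3) arrays of corresponding points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)          # dst-src cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                    # keep R a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (xs**2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
# Ground truth: scale 2, rotation about z by 90 degrees, translation
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = 2.0 * P @ Rz.T + np.array([1.0, 2.0, 3.0])
s, R, t = align_similarity(P, Q)
print(round(s, 6))
```

In practice the correspondences come from the matched tracks and the estimate is wrapped in a robust loop (e.g. RANSAC) to reject mismatches.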

  4. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands. Target applications include biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
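The windowing/frame-rate trade-off can be illustrated with a back-of-the-envelope model in which frame rate scales inversely with the number of rows read out; actual NV-CMOS timing depends on readout architecture and overheads:

```python
# Back-of-the-envelope: for a row-by-row readout, frame rate scales roughly
# inversely with the number of rows read. Illustrative only; real sensor
# timing includes per-frame overheads this model ignores.

FULL_ROWS, FULL_FPS = 1200, 60  # WUXGA height at 60 Hz (from the abstract)

def windowed_fps(rows):
    return FULL_FPS * FULL_ROWS / rows

print(round(windowed_fps(300)))  # quarter-height window
```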

  5. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  7. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  8. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). ESO PR Photo 22a/09 The CCD220 detector ESO PR Photo 22b/09 The OCam camera ESO PR Video 22a/09 OCam images "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  9. Action Video Gaming and Cognitive Control: Playing First Person Shooter Games Is Associated with Improved Action Cascading but Not Inhibition.

    Science.gov (United States)

    Steenbergen, Laura; Sellaro, Roberta; Stock, Ann-Kathrin; Beste, Christian; Colzato, Lorenza S

    2015-01-01

    There is a constantly growing interest in developing efficient methods to enhance cognitive functioning and/or to ameliorate cognitive deficits. One particular line of research focuses on the possible cognitive-enhancing effects that action video game (AVG) playing may have on game players. Interestingly, AVGs, especially first person shooter games, require gamers to develop different action control strategies to rapidly react to fast moving visual and auditory stimuli, and to flexibly adapt their behaviour to the ever-changing context. This study investigated whether and to what extent experience with such videogames is associated with enhanced performance on cognitive control tasks that require similar abilities. Experienced action videogame-players (AVGPs) and individuals with little to no videogame experience (NVGPs) performed a stop-change paradigm that provides a relatively well-established diagnostic measure of action cascading and response inhibition. Replicating previous findings, AVGPs showed higher efficiency in response execution, but not improved response inhibition (i.e. inhibitory control), as compared to NVGPs. More importantly, compared to NVGPs, AVGPs showed enhanced action cascading processes when an interruption (stop) and a change towards an alternative response were required simultaneously, as well as when such a change had to occur after the completion of the stop process. Our findings suggest that playing AVGs is associated with enhanced action cascading and multi-component behaviour without affecting inhibitory control.

  10. Action Video Gaming and Cognitive Control: Playing First Person Shooter Games Is Associated with Improved Action Cascading but Not Inhibition.

    Directory of Open Access Journals (Sweden)

    Laura Steenbergen

    Full Text Available There is a constantly growing interest in developing efficient methods to enhance cognitive functioning and/or to ameliorate cognitive deficits. One particular line of research focuses on the possible cognitive-enhancing effects that action video game (AVG) playing may have on game players. Interestingly, AVGs, especially first person shooter games, require gamers to develop different action control strategies to rapidly react to fast moving visual and auditory stimuli, and to flexibly adapt their behaviour to the ever-changing context. This study investigated whether and to what extent experience with such videogames is associated with enhanced performance on cognitive control tasks that require similar abilities. Experienced action videogame-players (AVGPs) and individuals with little to no videogame experience (NVGPs) performed a stop-change paradigm that provides a relatively well-established diagnostic measure of action cascading and response inhibition. Replicating previous findings, AVGPs showed higher efficiency in response execution, but not improved response inhibition (i.e. inhibitory control), as compared to NVGPs. More importantly, compared to NVGPs, AVGPs showed enhanced action cascading processes when an interruption (stop) and a change towards an alternative response were required simultaneously, as well as when such a change had to occur after the completion of the stop process. Our findings suggest that playing AVGs is associated with enhanced action cascading and multi-component behaviour without affecting inhibitory control.

  11. Role of intercostal nerve block in reducing postoperative pain following video-assisted thoracoscopy: A randomized controlled trial.

    Science.gov (United States)

    Ahmed, Zulfiqar; Samad, Khalid; Ullah, Hameed

    2017-01-01

    The main advantages of video-assisted thoracoscopic surgery (VATS) include less post-operative pain, rapid recovery, fewer postoperative complications, shorter hospital stay, and early discharge. Although pain intensity is lower than after conventional thoracotomy, patients still experience up to moderate pain postoperatively. The objective of this study was to assess the efficacy and morphine-sparing effect of intercostal nerve block in alleviating immediate post-operative pain in patients undergoing VATS. Sixty ASA I-III patients, aged 16 to 60 years, undergoing mediastinal lymph node biopsy through VATS under general anaesthesia were randomly divided into two groups. The intercostal nerve block (ICNB) group received the block along with patient-controlled intravenous analgesia (PCIA) with morphine, while the control group received only PCIA with morphine for post-operative analgesia. Patients were followed for twenty-four hours postoperatively for management of post-operative pain in the recovery room and ward. Pain was assessed using a visual analogue scale (VAS) at 1, 6, 12 and 24 hours. There was a significant decrease in pain score and morphine consumption in the ICNB group as compared to the control group in the first 6 hours postoperatively. There was no significant difference in pain scores or morphine consumption between the two groups after 6 hours. Patients receiving intercostal nerve block had better pain control and lower morphine consumption in the early (6-hour) post-operative period than patients who did not receive the block.

  12. The impact of red light running camera flashes on younger and older drivers' attention and oculomotor control.

    Science.gov (United States)

    Wright, Timothy J; Vitale, Thomas; Boot, Walter R; Charness, Neil

    2015-12-01

    Recent empirical evidence has suggested that the flashes associated with red light running cameras (RLRCs) distract younger drivers, pulling attention away from the roadway and delaying processing of safety-relevant events. Considering the perceptual and attentional declines that occur with age, older drivers may be especially susceptible to the distracting effects of RLRC flashes, particularly in situations in which the flash is more salient (a bright flash at night compared with the day). The current study examined how age and situational factors potentially influence attention capture by RLRC flashes using covert (cuing effects) and overt (eye movement) indices of capture. We manipulated the salience of the flash by varying its luminance and contrast with respect to the background of the driving scene (either day or night scenes). Results of 2 experiments suggest that simulated RLRC flashes capture observers' attention, but, surprisingly, no age differences in capture were observed. However, an analysis examining early and late eye movements revealed that older adults may have been strategically delaying their eye movements in order to avoid capture. Additionally, older adults took longer to disengage attention following capture, suggesting at least 1 age-related disadvantage in capture situations. Findings have theoretical implications for understanding age differences in attention capture, especially with respect to capture in real-world scenes, and inform future work that should examine how the distracting effects of RLRC flashes influence driver behavior.

  13. A High End Building Automation and Online Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    Iyer Adith Nagarajan

    2015-02-01

    Full Text Available This paper deals with the design and implementation of a building automation and security system which facilitates a healthy, flexible, comfortable and secure environment for the residents. The design incorporates a SIRC (Sony Infrared Remote Control) protocol-based infrared remote controller for the wireless operation and control of electrical appliances. Alternatively, the appliances are monitored and controlled via a laptop using a GUI (Graphical User Interface) application built in C#. Apart from automation, this paper also focuses on indoor security. Multiple PIR (Pyroelectric Infrared) sensors are placed within the area under surveillance to detect any intruder. A web camera used to record the video footage is mounted on the shaft of a servo motor to enable angular motion. Depending on which sensor has detected the motion, the ARM7 LPC2148 microcontroller provides appropriate PWM pulses to drive the servo motor, adjusting the position and orientation of the camera precisely. OpenCV libraries are used to record a video feed of 5 seconds at 30 frames per second (fps). Video frames are embedded with date and time stamps. The recorded video is compressed, saved to a predefined directory (for backup) and also uploaded to a specific remote location over the internet using Google Drive for instant access. The entire security system is automatic and does not need any human intervention.
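The sensor-to-servo mapping can be sketched as follows: each PIR sensor covers a known bearing, which the controller converts to a standard hobby-servo pulse width (1.0-2.0 ms across a 0-180 degree range). The bearing table and the omitted LPC2148 timer details are assumptions:

```python
# Illustrative mapping from a triggered PIR sensor to a servo PWM pulse
# width. Bearings are assumed; LPC2148 timer/pin setup is omitted.

SENSOR_BEARINGS_DEG = {0: 0, 1: 45, 2: 90, 3: 135, 4: 180}

def pulse_width_ms(sensor_id):
    """Standard hobby-servo convention: 1 ms at 0 deg, 2 ms at 180 deg."""
    angle = SENSOR_BEARINGS_DEG[sensor_id]
    return 1.0 + (angle / 180.0) * 1.0

print(pulse_width_ms(2))  # sensor facing 90 degrees -> 1.5 ms
```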

  14. Video Editing System

    Science.gov (United States)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general use system based, on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system, and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller and the Pioneer rewritable laserdisc recorder.

  15. What Video Styles can do for User Research

    DEFF Research Database (Denmark)

    Blauhut, Daniela; Buur, Jacob

    2009-01-01

    the video camera actually plays in studying people and establishing design collaboration still exists. In this paper we argue that traditional documentary film approaches like Direct Cinema and Cinéma Vérité show that a purely observational approach may not be most valuable for user research and that video...... material can be used in a variety of ways to explore, understand and present the everyday. Based on a comparison of several video studies of similar activities, but shot by different researchers, we develop the concept of ‘styles’ in video studies, and define three camera styles that may be a help...

  16. Evaluating intensified camera systems

    Energy Technology Data Exchange (ETDEWEB)

    S. A. Baker

    2000-07-01

    This paper describes image evaluation techniques used to standardize camera system characterizations. Key areas of performance include resolution, noise, and sensitivity. This team has developed a set of analysis tools, in the form of image processing software used to evaluate camera calibration data, to aid an experimenter in measuring a set of camera performance metrics. These performance metrics identify capabilities and limitations of the camera system, while establishing a means for comparing camera systems. Analysis software is used to evaluate digital camera images recorded with charge-coupled device (CCD) cameras. Several types of intensified camera systems are used in the high-speed imaging field. Electro-optical components are used to provide precise shuttering or optical gain for a camera system. These components, including microchannel-plate or proximity-focused diode image intensifiers, electrostatic image tubes, and electron-bombarded CCDs, affect system performance. It is important to quantify camera system performance in order to qualify a system as meeting experimental requirements. The camera evaluation tool is designed to provide side-by-side camera comparison and system modeling information.
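One common metric of the kind described, temporal noise and SNR estimated from a stack of flat-field calibration frames, can be sketched as follows (this mirrors typical characterization practice, not the paper's exact tools):

```python
import numpy as np

# Per-pixel temporal noise and overall SNR from repeated flat-field
# exposures: signal is the mean over the stack, noise is the per-pixel
# standard deviation across frames.

def snr_from_stack(frames):
    """frames: (N, H, W) stack of nominally identical flat-field frames."""
    mean_frame = frames.mean(axis=0)          # signal estimate
    noise_frame = frames.std(axis=0, ddof=1)  # temporal noise per pixel
    return mean_frame.mean() / noise_frame.mean()

# Synthetic calibration data: mean level 1000 DN, noise sigma 10 DN
rng = np.random.default_rng(1)
stack = rng.normal(loc=1000.0, scale=10.0, size=(50, 32, 32))
print(round(snr_from_stack(stack)))
```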

  17. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
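The quantities the demonstration explores can be estimated with the common Rayleigh rule of thumb for pinhole diameter and the usual exposure ~ (f-number)^2 scaling; the numbers below are illustrative, not from the article:

```python
import math

# Pinhole-camera back-of-the-envelope: "optimal" diameter via the Rayleigh
# rule of thumb d = 1.9 * sqrt(wavelength * distance), and exposure time
# relative to a lens camera, scaling with the square of the f-number.

def optimal_pinhole_d_mm(distance_mm, wavelength_mm=550e-6):
    return 1.9 * math.sqrt(wavelength_mm * distance_mm)

def relative_exposure(distance_mm, pinhole_d_mm, base_f_number=8.0):
    """Exposure time relative to a lens camera at f/8 (exposure ~ N**2)."""
    n = distance_mm / pinhole_d_mm
    return (n / base_f_number) ** 2

d = optimal_pinhole_d_mm(50.0)  # ~0.32 mm at 50 mm from the sensor
print(round(relative_exposure(50.0, d)))
```

Students can vary the pinhole diameter in this model to see the same exposure-time and sharpness trade-offs the demonstration makes visible.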

  18. Vision Sensors and Cameras

    Science.gov (United States)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.
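Why a logarithmic pixel response yields constant contrast sensitivity can be shown in a few lines: equal *ratios* of illuminance map to equal output steps, so a scene range of nearly a million to one compresses into a modest output swing. The transfer-curve numbers below are illustrative, not a specific sensor's:

```python
import math

# Idealized logarithmic pixel: output rises by a fixed number of
# millivolts per decade of illuminance. Parameter values are illustrative.

def log_pixel_response(lux, lux_min=1e-3, mv_per_decade=60.0):
    return mv_per_decade * math.log10(lux / lux_min)

# A 2:1 contrast edge produces the same output step at any light level:
step_dim = log_pixel_response(0.02) - log_pixel_response(0.01)
step_bright = log_pixel_response(20000.0) - log_pixel_response(10000.0)
print(round(step_dim, 3), round(step_bright, 3))
```

A charge-integrating (linear) pixel, by contrast, would produce a step proportional to the absolute illuminance difference, which is why it clips at one end of such a scene.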

  19. Use of the GlideScope Ranger Video Laryngoscope for Emergency Intubation in the Prehospital Setting: A Randomized Control Trial.

    Science.gov (United States)

    Trimmel, Helmut; Kreutziger, Janett; Fitzka, Robert; Szüts, Stephan; Derdak, Christoph; Koch, Elisabeth; Erwied, Boris; Voelckel, Wolfgang G

    2016-07-01

    We sought to assess whether the GlideScope Ranger video laryngoscope is a reliable alternative to direct laryngoscopy in the prehospital setting. Multicenter, prospective, randomized controlled trial with patient recruitment over 18 months. Four study centers operating physician-staffed rescue helicopters or ground units in Austria and Norway. Adult emergency patients requiring endotracheal intubation. Airway management strictly followed a prehospital algorithm: the first and second intubation attempts employed the GlideScope or direct laryngoscopy as randomized, with crossover on the third attempt; after three failed intubation attempts, an extraglottic airway device was used immediately. A total of 326 patients were enrolled. The success rate in the GlideScope (n = 168) versus direct laryngoscopy (n = 158) group was 61.9% (104/168) versus 96.2% (152/158), respectively. The most frequent reason for failed GlideScope intubation was failure to advance the tube into the larynx or trachea (26/168 vs 0/158). When GlideScope intubation failed, direct laryngoscopy was successful in 61 of 64 patients (95.3%), whereas the GlideScope enabled intubation in four of six cases (66.7%) where direct laryngoscopy failed (p = 0.055). In addition, the GlideScope was prone to impaired visualization of the monitor because of ambient light (29/168; 17.3%). There was no correlation between success rates and body mass index, age, indication for airway management, or experience of the physicians, respectively. Video laryngoscopy is an established tool in difficult airway management, but our results shed light on the specific problems of the emergency medical service setting. Prehospital use of the GlideScope was associated with some major problems, thus resulting in a lower intubation success rate when compared with direct laryngoscopy.

  20. A comparison of King Vision video laryngoscopy and direct laryngoscopy as performed by residents: a randomized controlled trial.

    Science.gov (United States)

    Valencia, Jose A; Pimienta, Katherine; Cohen, Darwin; Benitez, Daniel; Romero, David; Amaya, Oswaldo; Arango, Enrique

    2016-12-01

    For more than 40 years, direct laryngoscopy (DL) has been used to secure the airway during endotracheal intubation. The King Vision video laryngoscope is one of the latest devices introduced for endotracheal intubation. We hypothesized that, relative to direct laryngoscopy, it improves the intubation success rate with fewer intubation attempts and no difference in intubation time or complications. This randomized controlled clinical trial was conducted in the operating room and postanesthesia care unit of an academic hospital. Eighty-eight patients with American Society of Anesthesiologists status I to II and aged ≥18 years who were scheduled for elective surgery under general anesthesia and had no predictors of difficult airway were enrolled. Patients were randomized (44 per group) to undergo intubation using either DL or King Vision video laryngoscopy (KVVL) performed by first-year residents in anesthesia and intensive care. During endotracheal intubation by residents, the measurements were success rate, number of attempts, time to intubation, visualization of the glottis, and presence of complications. Both groups had a 100% success rate. A greater frequency of grade 1 laryngoscopy was reported with KVVL (86.4%) than with DL (59.1%). There was no difference in time to intubation or the number of attempts between the groups (P = .75 and P = .91, respectively). Complications after intubation were infrequent and included oral trauma, esophageal intubation, and sore throat. The use of KVVL by residents with less than 1 year of training (considered nonexperts) significantly improves visualization of the glottis in patients without predictors of difficult airway. The incidence of complications was too low to draw conclusions. Copyright © 2016. Published by Elsevier Inc.