WorldWideScience

Sample records for control video cameras

  1. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
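    For illustration only: a minimal sketch, assuming a flat ground plane and a known camera height, of the kind of pan/tilt-to-map mapping the abstract describes. The function name, coordinate conventions, and parameters are hypothetical and not taken from the paper.

```python
import numpy as np

def ptz_to_map(pan_deg, tilt_deg, cam_xy, cam_height):
    """Map a pan/tilt pointing direction to ground-plane (map) coordinates.

    Assumes a flat ground plane at z = 0, a camera at height `cam_height`
    above map point `cam_xy`, pan measured clockwise from map north (+y),
    and tilt measured downward from the horizon.
    """
    pan = np.radians(pan_deg)
    tilt = np.radians(tilt_deg)
    if tilt <= 0:
        raise ValueError("ray does not intersect the ground plane")
    # Unit viewing ray in map coordinates (x east, y north, z up).
    ray = np.array([np.sin(pan) * np.cos(tilt),
                    np.cos(pan) * np.cos(tilt),
                    -np.sin(tilt)])
    # Scale the ray until it reaches the ground (z = 0).
    t = cam_height / np.sin(tilt)
    ground = np.array([cam_xy[0], cam_xy[1], cam_height]) + t * ray
    return ground[:2]  # map easting/northing of the viewed ground point

# Example: a camera 12 m above map point (500, 300), pointing 40 deg east
# of north and 15 deg below the horizon.
print(ptz_to_map(40.0, 15.0, (500.0, 300.0), 12.0))
```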

  2. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras on board a sub-orbital rocket were controlled in real time from the ground during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the selection of the analog video signal to be sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  3. Automated safety control by video cameras

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.; Somhorst, M.

    2012-01-01

    At this moment many surveillance systems are installed in public domains to control the safety of people and properties. They are constantly watched by human operators who are easily overloaded. To support the human operators, a surveillance system model is designed that detects suspicious behaviour.

  4. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  5. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key-frame sequence or a video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l2,1-norm minimization. The objective function is twofold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second uses a capped l2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
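    As an illustrative aside (not the authors' joint-embedding solver): a minimal sketch of the capped l2,1 penalty named in the abstract, plus a toy rule that reads representatives off the row norms of a self-expression coefficient matrix. The matrix here is random; in practice it would come from the optimization the paper describes.

```python
import numpy as np

def capped_l21(C, theta):
    """Capped l2,1 norm: sum_i min(||C[i, :]||_2, theta).

    Rows of C are the coefficients with which each candidate frame
    reconstructs the whole collection; capping at `theta` limits the
    influence of outlier rows.
    """
    row_norms = np.linalg.norm(C, axis=1)
    return np.minimum(row_norms, theta).sum()

def pick_representatives(C, k):
    """Toy selection rule: the frames whose coefficient rows have the
    largest l2 norms act as representatives."""
    row_norms = np.linalg.norm(C, axis=1)
    return np.argsort(row_norms)[::-1][:k]

rng = np.random.default_rng(0)
C = rng.standard_normal((50, 200)) * (rng.random((50, 1)) < 0.1)  # sparse rows
print(capped_l21(C, theta=1.0), pick_representatives(C, k=5))
```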

  6. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    Science.gov (United States)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
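    The automatic gain control and relative spectral gain control mentioned above have simple digital analogues. The sketch below is illustrative only: the target level and the green reference channel are arbitrary choices, and it does not reproduce the paper's analog circuitry.

```python
import numpy as np

def automatic_gain(scanline, target=0.5, eps=1e-6):
    """Scale a scan line so its mean level reaches `target` (values in [0, 1])."""
    gain = target / max(scanline.mean(), eps)
    return np.clip(scanline * gain, 0.0, 1.0)

def relative_spectral_gain(rgb_lines, reference_channel=1):
    """Balance color channels against a reference channel (green by default)."""
    ref_mean = rgb_lines[reference_channel].mean()
    return [np.clip(line * (ref_mean / max(line.mean(), 1e-6)), 0.0, 1.0)
            for line in rgb_lines]
```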

  7. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  8. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs

  9. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone and a PC server. The platform can receive the wireless signal from the camera and show on the mobile phone the live video captured by the camera. In addition, it is able to send commands to the camera and control the rotation of the camera's holder. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  10. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera, for example, offers several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera, so high resolution and a high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
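    A heavily simplified sketch of the fusion idea, assuming the two streams are already registered to the same view: upscale each low-resolution, high-frame-rate frame and add back high-frequency detail from the temporally nearest high-resolution keyframe. The blending rule is an assumption for illustration, not the calibration-based method of the paper.

```python
import cv2
import numpy as np

def fuse_streams(fast_frames, fast_times, sharp_frames, sharp_times, out_size):
    """Combine a high-frame-rate/low-resolution stream (fast_*) with a
    low-frame-rate/high-resolution stream (sharp_*) into one enhanced
    sequence. out_size is (width, height). Illustrative only."""
    fused = []
    for frame, t in zip(fast_frames, fast_times):
        up = cv2.resize(frame, out_size, interpolation=cv2.INTER_CUBIC)
        # Temporally nearest high-resolution keyframe.
        k = int(np.argmin([abs(t - tk) for tk in sharp_times]))
        key = cv2.resize(sharp_frames[k], out_size)
        # High-frequency detail of the keyframe, added onto the upscaled frame.
        detail = key.astype(np.float32) - cv2.GaussianBlur(key, (0, 0), 3).astype(np.float32)
        fused.append(np.clip(up.astype(np.float32) + detail, 0, 255).astype(np.uint8))
    return fused
```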

  11. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  12. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  13. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been conducted (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and interviews, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Design of a prototype ISIS is currently under way, and it will hopefully be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  14. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions that could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each individual stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, which is applicable, for example, in wireless sensor networks for surveillance or navigation.
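    A minimal sketch of the registration step, assuming matched feature locations in the two synchronized frames are already available; it uses standard OpenCV calls, and the function names are illustrative rather than the authors'.

```python
import cv2
import numpy as np

def register_views(pts_cam1, pts_cam2):
    """Estimate a homography mapping image points in camera 1 to camera 2
    from matched feature locations (N x 2 arrays), using RANSAC to reject
    bad matches."""
    H, inliers = cv2.findHomography(np.float32(pts_cam1), np.float32(pts_cam2),
                                    cv2.RANSAC, 3.0)
    return H, inliers

def transfer_point(H, xy):
    """Map a single (x, y) point from camera 1 into camera 2's image."""
    p = np.float32([[xy]])                       # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]
```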

  15. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  16. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  17. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  18. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to
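    A sketch of the end-to-end idea in code, assuming calibration sources of known magnitude have been recorded through the same chain as the science data; the low-order polynomial in log space is an assumed functional form, not the one used in the actual calibration.

```python
import numpy as np

def fit_response(known_mags, measured_signal, order=3):
    """Fit the recorded camera signal as a polynomial in (known) magnitude,
    end to end through the whole recording chain."""
    return np.polyfit(known_mags, np.log10(measured_signal), order)

def signal_to_mag(coeffs, signal, mag_grid=np.linspace(-10, 15, 2001)):
    """Invert the fitted curve numerically: find the magnitude whose
    predicted signal is closest to the observed one."""
    predicted = 10.0 ** np.polyval(coeffs, mag_grid)
    return mag_grid[np.argmin(np.abs(predicted - signal))]
```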

  19. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.
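    The "logical device API" idea, where application code is identical for local and remote sensors, can be illustrated with a small hypothetical interface. This sketch shows only the uniform-interface notion; it says nothing about CameraCast's kernel-level realization or its capability-based protection, and all class names are invented.

```python
from abc import ABC, abstractmethod
import cv2

class VideoSensor(ABC):
    """One logical-device interface for local and remote sensors (illustrative)."""
    @abstractmethod
    def read_frame(self): ...
    @abstractmethod
    def close(self) -> None: ...

class LocalSensor(VideoSensor):
    def __init__(self, device_index=0):
        self._cap = cv2.VideoCapture(device_index)   # local capture device
    def read_frame(self):
        ok, frame = self._cap.read()
        return frame if ok else None
    def close(self) -> None:
        self._cap.release()

class RemoteSensor(VideoSensor):
    def __init__(self, url):
        # e.g. an RTSP/HTTP stream URL; in a CameraCast-like system a
        # capability token for differential protection would be negotiated
        # out of band before the stream is opened.
        self._cap = cv2.VideoCapture(url)
    def read_frame(self):
        ok, frame = self._cap.read()
        return frame if ok else None
    def close(self) -> None:
        self._cap.release()

def grab(sensor: VideoSensor):
    """Application code that works identically on local and remote sensors."""
    frame = sensor.read_frame()
    sensor.close()
    return frame
```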

  20. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e., automatically controlling the virtual...

  1. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  2. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the worldwide level of industrial development requires considerably higher standards of quality of manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Thanks to their properties, the ease with which they are obtained, and their very good discrimination of the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography, based on converter screens and radiological films, to neutron radioscopy, based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. For real-time investigations, tube-type cameras, CCD cameras and, recently, CID cameras are used; these capture the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal processing technology included in the camera. The image acquisition card or frame grabber of a PC converts the digital signal into an image, which is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that move the object table horizontally and vertically and rotate it. Based on this system, many static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects can be done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  3. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  4. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
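    For illustration, a minimal zero-point fit of the kind such a calibration ultimately feeds, assuming linearized instrumental fluxes and reference-star magnitudes already expressed in the camera bandpass; the sigma-clipped median is an assumption, not the MEO procedure.

```python
import numpy as np

def zero_point(catalog_mags, instrumental_fluxes, clip_sigma=3.0):
    """Zero point zp such that m = zp - 2.5*log10(flux) matches the catalog."""
    zp = np.asarray(catalog_mags) + 2.5 * np.log10(np.asarray(instrumental_fluxes))
    med, std = np.median(zp), np.std(zp)
    kept = zp[np.abs(zp - med) <= clip_sigma * std]   # simple sigma clip
    return np.median(kept), np.std(kept) / np.sqrt(len(kept))

def meteor_mag(zp, flux):
    """Apply the fitted zero point to a (linearity-corrected) meteor flux."""
    return zp - 2.5 * np.log10(flux)
```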

  5. Video segmentation and camera motion characterization using compressed data

    Science.gov (United States)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques directly process and analyze MPEG-1 motion vectors, without the need for video decompression. Experimental results are reported for a database of news video clips.
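    A sketch of the least-squares step under a simple three-parameter model (pan, tilt, and zoom about a known image centre) fitted to macroblock motion vectors; the actual parameterization used in the paper may differ.

```python
import numpy as np

def fit_pan_tilt_zoom(block_xy, motion_vectors, center):
    """Least-squares fit of (pan, tilt, zoom) to MPEG macroblock motion
    vectors, with model u = pan + zoom*(x - cx), v = tilt + zoom*(y - cy)."""
    x = block_xy[:, 0] - center[0]
    y = block_xy[:, 1] - center[1]
    u, v = motion_vectors[:, 0], motion_vectors[:, 1]
    n = len(x)
    A = np.zeros((2 * n, 3))          # unknowns: [pan, tilt, zoom]
    A[:n, 0] = 1.0; A[:n, 2] = x      # u-equations
    A[n:, 1] = 1.0; A[n:, 2] = y      # v-equations
    b = np.concatenate([u, v])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                      # (pan, tilt, zoom) for this frame pair
```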

  6. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    A submersible video camera system, which aimed to record images of the growth of aquatic vegetation in Antarctic lakes for one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video system, without increasing the power consumption. This system was set on the lake floor of Lake Naga Ike (a tentative name) in Skarvsnes, on the Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater visual images for one year has been started by our diving operation.

  7. Endoscopic Camera Control by Head Movements for Thoracic Surgery

    NARCIS (Netherlands)

    Reilink, Rob; de Bruin, Gart; Franken, M.C.J.; Mariani, Massimo A.; Misra, Sarthak; Stramigioli, Stefano

    2010-01-01

    In current video-assisted thoracic surgery, the endoscopic camera is operated by an assistant of the surgeon, which has several disadvantages. This paper describes a system which enables the surgeon to control the endoscopic camera without the help of an assistant. The system is controlled using

  8. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes using the grayscale gradient direction and fits them with an improved Probabilistic Hough transform; it then uses the vanishing point principle to calculate the geometric position of the lanes, and uses lane characteristics to extract lane semantic information through decision tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
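    A sketch of the detection stages named above (gradient-based edges, probabilistic Hough fitting, a least-squares vanishing point), using standard OpenCV calls; the thresholds are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def detect_lane_segments(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                     # gradient-based edges
    # Probabilistic Hough transform fits straight segments to the edge map.
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=40, maxLineGap=20)
    return [] if segs is None else [s[0] for s in segs]  # (x1, y1, x2, y2)

def vanishing_point(segments):
    """Least-squares intersection of the fitted lane lines (a*x + b*y = c)."""
    A, c = [], []
    for x1, y1, x2, y2 in segments:
        a, b = y2 - y1, x1 - x2
        A.append([a, b]); c.append(a * x1 + b * y1)
    vp, *_ = np.linalg.lstsq(np.array(A, float), np.array(c, float), rcond=None)
    return vp                                            # (x, y) in image coords
```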

  9. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera operates to effect the carrying out of a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.

  10. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  11. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of tasks. The project objective is to develop the principal elements of the algorithm for recognition of a moving object to be detected by several cameras. The images obtained by different cameras will be processed, and parameters of motion are to be identified to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of a camera placement algorithm designated for the identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of overlapping sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The
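    A small geometric check of the kind such a placement algorithm needs, assuming each camera's coverage is a circular sector defined by position, heading, field of view and range; the parameters and the two-camera overlap rule below are generic illustrations, not the paper's model.

```python
import math

def in_sector(point, cam_pos, heading_deg, fov_deg, max_range):
    """True if `point` lies inside the camera's circular-sector coverage."""
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return abs(diff) <= fov_deg / 2.0

def covered_by_at_least(point, cameras, k=2):
    """Overlap requirement: `point` must be seen by at least k cameras."""
    return sum(in_sector(point, *cam) for cam in cameras) >= k

cams = [((0, 0), 45.0, 60.0, 30.0), ((20, 0), 135.0, 60.0, 30.0)]
print(covered_by_at_least((10.0, 8.0), cams, k=2))   # True: both sectors overlap here
```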

  12. Teacher training for using digital video camera in primary education

    Directory of Open Access Journals (Sweden)

    Pablo García Sempere

    2011-12-01

    This paper shows the partial results of a research study carried out in primary schools, which evaluates teachers' ability to use digital video cameras. The study took place in the province of Granada, Spain. Our purpose was to know the level of knowledge, interest, difficulties and training needs, so as to improve teaching practice. The work has been done from a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative (focus group) techniques have been used in this research. The information obtained shows that most of the teachers lack knowledge in the use of the video camera and digital editing. On the other hand, the majority agree to include initial and permanent training on this subject. Finally, the most important conclusions are presented.

  13. Identifying sports videos using replay, text, and camera motion features

    Science.gov (United States)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
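    A sketch of the final classification stage under assumed feature names (replay rate, scene-text ratio, camera-motion statistics); scikit-learn's DecisionTreeClassifier stands in for whatever tree implementation the paper used, and the training data here is synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: replay shots per minute, fraction of frames with scene text,
# mean camera-motion magnitude, motion-direction variance.
rng = np.random.default_rng(1)
sports     = np.column_stack([rng.normal(1.5, 0.5, 200), rng.normal(0.30, 0.10, 200),
                              rng.normal(6.0, 2.0, 200), rng.normal(2.0, 0.50, 200)])
non_sports = np.column_stack([rng.normal(0.1, 0.1, 200), rng.normal(0.05, 0.05, 200),
                              rng.normal(1.0, 0.5, 200), rng.normal(0.5, 0.30, 200)])
X = np.vstack([sports, non_sports])
y = np.array([1] * 200 + [0] * 200)              # 1 = sports clip

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(clf.predict([[1.2, 0.25, 5.5, 1.8]]))      # expected: sports (1)
```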

  14. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the magnified vision of the surgeon, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI cable wiring. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  15. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  16. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

    By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. Benefits include a reduced need for on-site security and operating personnel; a key feature is its patented analytics product, known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView, which has its own source code and re-programmed code. 1 fig.

  17. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    Science.gov (United States)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of the human object's face for biometric purposes, (2) maintaining optimal video quality of the human objects, and (3) minimizing hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan and tilt angles of capture, face visibility and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
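    An illustrative sketch of an objective of the kind described (distance, pan/tilt-angle and face-visibility terms combined with weights) and a greedy camera-to-subject assignment; the weights, ranges and functional form are assumptions, and the greedy rule stands in for the real scheduler.

```python
import math

def capture_score(distance_m, pan_deg, tilt_deg, face_visible,
                  w=(0.4, 0.2, 0.2, 0.2), best_range=(3.0, 15.0)):
    """Higher is better: favour subjects inside the camera's sweet-spot range,
    small pan/tilt slews, and a visible face."""
    lo, hi = best_range
    range_term = 1.0 if lo <= distance_m <= hi else math.exp(-abs(distance_m - (lo + hi) / 2) / hi)
    pan_term = 1.0 - min(abs(pan_deg), 180.0) / 180.0
    tilt_term = 1.0 - min(abs(tilt_deg), 90.0) / 90.0
    return w[0] * range_term + w[1] * pan_term + w[2] * tilt_term + w[3] * float(face_visible)

def assign_cameras(scores):
    """scores[c][s] = capture_score for camera c and subject s; greedily give
    each camera its best still-unassigned subject."""
    assigned, plan = set(), {}
    for c, row in enumerate(scores):
        best = max((s for s in range(len(row)) if s not in assigned),
                   key=lambda s: row[s], default=None)
        if best is not None:
            plan[c] = best
            assigned.add(best)
    return plan
```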

  18. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.

  19. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building

  20. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide-field color fundus videos and images of the human eye at pupil sizes of 2 mm. This means that it can be used with a non-dilated pupil, even with bright ambient light. We realized a mobile demonstrator to prove the method and could successfully acquire color fundus videos of subjects. We designed the demonstrator as a low-cost device consisting of mass-market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is present in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2 mm from a circular field 20° in diameter to a field of 68° by 18°. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change in the paleness of the papilla.

  1. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1994-01-01

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock which shuts down all the electronics inside the 101-AW vapor space, during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak checking all fittings on the 101-AW Purge Panel for leakage using a Snoop solution and resolving the leakage. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition the green light (PRESSURIZED) (located on the Purge Control Panel) is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss of purge operation the illumination of the amber light (PURGE FAILED) will be verified

  2. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high-speed dimensional measuring system capable of scanning a thin-film network and determining whether there are conductor widths, resistor widths, or spaces not typical of the design for this product is described. The eye of the system is a conventional TV camera, although such devices as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented to the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage and communicates with the digitizer to select the picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal.

  3. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  4. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
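    For reference, the two quality metrics used above as they are commonly computed: PSNR from first principles, and SSIM via scikit-image if that library is available (shown as a comment only).

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# SSIM is available in scikit-image:
# from skimage.metrics import structural_similarity
# ssim_value = structural_similarity(reference, test, data_range=255)
```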

  5. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
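    A sketch of the measurement chain the study describes, assuming gray-level frames and a chosen region of interest: average the gray levels per frame, remove the mean, and inspect the spectrum, remembering that any vibration above half the frame rate folds back as a non-physical (aliased) mode. The ROI format and function names are illustrative.

```python
import numpy as np

def vibration_spectrum(frames, roi, fps):
    """frames: iterable of 2-D gray-level arrays; roi: (y0, y1, x0, x1).
    Returns (frequencies in Hz, amplitude spectrum) of the ROI's mean level."""
    y0, y1, x0, x1 = roi
    signal = np.array([f[y0:y1, x0:x1].mean() for f in frames], dtype=np.float64)
    signal -= signal.mean()                       # remove the DC level
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs, spectrum                        # modes above fps/2 are aliased

def dominant_frequency(freqs, spectrum):
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the zero-frequency bin
```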

  6. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the dominant modes of a vibration signal, but may also capture non-physical modes induced by an insufficient frame rate. Using a simple model, the frequencies of these modes are predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera, for instance 0 to 256 levels, was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  7. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all of the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure

  8. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) include color interpolation, white balance, adaptive binary processing, automatic gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrates that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed using Verilog HDL. The prototype chip has been verified on a low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution when combined with extra components and can demonstrate each DSP function.
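
    White balance is one of the DSP stages listed above. The sketch below shows a simple gray-world white balance, which is only an illustrative stand-in; the abstract does not specify which white-balance algorithm the authors implemented.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Scale the R, G, B channels so their means match the overall gray mean."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means   # per-channel gain
    balanced = rgb * gains                          # broadcast over the last axis
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Hypothetical frame with an exaggerated blue color cast.
rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(120, 160, 3)).astype(np.float64)
frame[..., 2] *= 1.3
frame = np.clip(frame, 0, 255).astype(np.uint8)
print(gray_world_white_balance(frame).shape)
```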

  9. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    International audience; Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  10. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    Science.gov (United States)

    2017-10-01

    ARL-TR-8185, October 2017, US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  11. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  12. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Full Text Available Acquisition of stabilized video is an important issue for various type of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i robust feature detection using particle keypoints between adjacent frames; (ii camera path estimation and smoothing; and (iii rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with less holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV. The proposed video stabilization method is suitable for enhancing the visual quality for various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
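
    The sketch below illustrates the general camera-path idea with OpenCV: feature-based motion estimation between adjacent frames followed by a simple moving-average smoothing of the accumulated path. It is a simplified stand-in; the paper's particle keypoints and l1-optimized total-variation path are not reproduced here, and the frames are assumed to be well-textured BGR images.

```python
import cv2
import numpy as np

def estimate_path(frames):
    """Accumulate per-frame (dx, dy, da) motion estimated from tracked corner features."""
    transforms = []
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=20)
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.flatten() == 1
        m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
        dx, dy, da = m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])
        transforms.append((dx, dy, da))
        prev_gray = gray
    return np.cumsum(np.array(transforms), axis=0)   # the raw camera path

def smooth_path(path, radius=15):
    """Moving-average smoothing (a stand-in for the paper's l1-optimized TV minimisation)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                     for i in range(path.shape[1])], axis=1)
```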

  13. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that, via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001). Despite this limitation, our findings suggest that video cameras can be used in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  14. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David

    2007-01-01

    Video-based camera tracking consists of estimating the three-dimensional pose followed by a mobile camera, using video as the sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three-dimensional references are needed. Examples of such references are landmarks with a known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by the camera with what is geometrically known of the real scene, it is possible to recover the po...

  15. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David; Ebrahimi, Touradj

    2008-01-01

    Video-based camera tracking consists of estimating the three-dimensional pose followed by a mobile camera, using video as the sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three-dimensional references are needed. Examples of such references are landmarks with a known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by the camera with what is geometrically known of the real scene, it is possible to recover the po...

  16. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited-exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser

  17. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing

  18. Euratom experience with video surveillance - Single camera and other non-multiplexed

    International Nuclear Information System (INIS)

    Otto, P.; Cozier, T.; Jargeac, B.; Castets, J.P.; Wagner, H.G.; Chare, P.; Roewer, V.

    1991-01-01

    The Euratom Safeguards Directorate (ESD) has been using a number of single camera video systems (Ministar, MIVS, DCS) and non-multiplexed multi-camera systems (Digiquad) for routine safeguards surveillance applications during the last four years. This paper describes aspects of system design and considerations relevant for installation. It reports on system reliability and performance and presents suggestions on future improvements

  19. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
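
    A minimal sketch of the kind of fuzzy inference described above is given below, mapping the horizontal pixel error of a tracked target to a pan-rate command. The membership functions, rule outputs and frame size are illustrative assumptions, not the actual rule base of the system described in the record.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_pan_rate(error_px, half_width=320.0):
    """Map the target's horizontal offset from the frame centre to a pan rate (deg/s)."""
    e = max(-1.0, min(1.0, error_px / half_width))   # normalise error to [-1, 1]
    # Fuzzification: degree of membership in three linguistic sets.
    memberships = np.array([
        tri(e, -2.0, -1.0, 0.0),    # "target left"
        tri(e, -0.5,  0.0, 0.5),    # "target centred"
        tri(e,  0.0,  1.0, 2.0),    # "target right"
    ])
    # Rule consequents as Sugeno-style singletons (deg/s): pan left, hold, pan right.
    consequents = np.array([-10.0, 0.0, 10.0])
    return float(memberships @ consequents / (memberships.sum() + 1e-9))

# Example: target 120 px to the right of centre gives a positive (rightward) pan command.
print(f"pan rate: {fuzzy_pan_rate(120):.2f} deg/s")
```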

  20. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The single-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include, but are not limited to, core sampling, auger activities, crust layer examination, and monitoring of equipment installation/removal. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program

  1. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing
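
    The sketch below simulates the main noise sources such a model typically combines: signal-dependent shot noise, Gaussian read noise and ADC quantization. The gain and noise parameters are illustrative assumptions, not values measured in the paper, and demosaicing is omitted.

```python
import numpy as np

def simulate_sensor(photon_flux, gain=0.5, read_noise_e=5.0, bits=10, seed=3):
    """Simulate a CCD/ADC chain: shot noise + read noise + quantization."""
    rng = np.random.default_rng(seed)
    electrons = rng.poisson(photon_flux)                                       # shot (photon) noise
    electrons = electrons + rng.normal(0.0, read_noise_e, photon_flux.shape)   # read noise
    counts = np.round(electrons * gain)                                        # ADC quantization
    return np.clip(counts, 0, 2 ** bits - 1)

# Hypothetical flat scene of 1000 photo-electrons per pixel.
flat = np.full((256, 256), 1000.0)
frame = simulate_sensor(flat)
print(f"mean = {frame.mean():.1f} DN, std = {frame.std():.2f} DN")
```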

  2. Automatic mashup generation of multiple-camera videos

    NARCIS (Netherlands)

    Shrestha, P.

    2009-01-01

    The amount of user generated video content is growing enormously with the increase in availability and affordability of technologies for video capturing (e.g. camcorders, mobile-phones), storing (e.g. magnetic and optical devices, online storage services), and sharing (e.g. broadband internet,

  3. Virtual georeferencing : how an 11-eyed video camera helps the pipeline industry

    Energy Technology Data Exchange (ETDEWEB)

    Ball, C.

    2006-08-15

    Designed by Immersive Media Corporation (IMC), the new Telemmersion System is a lightweight camera system capable of generating synchronized high-resolution video streams that represent a full-motion spherical world. With 11 cameras configured in a sphere, the system is portable and can be easily mounted on ground and airborne vehicles for use in surveillance; integration; command and control of interactive intelligent databases; scenario modelling; and specialized training. The system was recently used to georeference immersive video of existing and proposed pipeline routes. Footage from the system was used to supplement traditional data collection methods for environmental impact assessment; route selection and engineering; regulatory compliance; and public consultation. Traditionally, the medium used to visualize pipeline routes is a map overlaid with aerial photography. Pipeline routes are typically flown throughout the planning stages in order to give environmentalists, engineers and other personnel the opportunity to visually inspect the terrain, identify issues and make decisions. The technology has significantly reduced costs during the planning stages of pipeline routes, as the remote footage can be stored on DVD, allowing various stakeholders and contractors to view the terrain and zoom in on particular land features. During one recent 3-day trip, 500 km of proposed pipeline was recorded using the technology. Typically, for various regulatory and environmental requirements, half a dozen trips would have been required. 2 figs.

  4. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  5. video114_0402c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  6. video115_0403 -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  7. video114_0402b -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  8. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We inves- tigate the relationship between camera placement and playing behaviour in games and build a user...... model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...... camera control in games is discussed....

  9. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Full Text Available Pedestrians in the vehicle path are in danger of being hit, causing severe injury to pedestrians and vehicle occupants. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance to vehicle–pedestrian collision warning and to the traffic safety of self-driving cars. In this article, a real-time scheme is proposed based on integral channel features and a graphics processing unit. The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution of the detectors with the input image is converted into the dot product of two larger matrices, which can be computed efficiently using a graphics processing unit. The experiments showed that the proposed method can detect pedestrians in the video of a car camera at 20+ frames per second with acceptable error rates. Thus, it can be applied in real-time detection tasks with the videos of car cameras.
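
    Converting a 2D convolution into the dot product of two larger matrices is commonly done with an im2col-style rearrangement. The sketch below shows the idea on the CPU with NumPy; the GPU dispatch and the actual channel-feature detectors of the paper are not included, and the image and kernel are synthetic.

```python
import numpy as np

def im2col(image, kh, kw):
    """Stack every kh x kw patch of `image` as a row of a large matrix."""
    h, w = image.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = image[i:i + kh, j:j + kw].ravel()
    return cols, (out_h, out_w)

def conv2d_as_matmul(image, kernel):
    """Valid 2D cross-correlation expressed as a single matrix-vector product."""
    cols, (out_h, out_w) = im2col(image, *kernel.shape)
    return (cols @ kernel.ravel()).reshape(out_h, out_w)

# Check against a direct sliding-window implementation on a small example.
rng = np.random.default_rng(4)
img = rng.normal(size=(32, 32))
ker = rng.normal(size=(5, 5))
direct = np.array([[np.sum(img[i:i + 5, j:j + 5] * ker) for j in range(28)] for i in range(28)])
print(np.allclose(conv2d_as_matmul(img, ker), direct))   # -> True
```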

  10. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  11. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in one camera has a different shape in another camera, which is a critical issue for wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  12. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  13. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  14. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    Science.gov (United States)

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  15. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local, non-imaging sensors such as acoustic, seismic, vibration, temperature, and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network, and present lessons learned in the building and daily usage of the network.

  16. Comparison of cardiopulmonary resuscitation techniques using video camera recordings.

    OpenAIRE

    Mann, C J; Heyworth, J

    1996-01-01

    OBJECTIVE--To use video recordings to compare the performance of resuscitation teams in relation to their previous training in cardiac resuscitation. METHODS--Over a 10 month period all cardiopulmonary resuscitations carried out in an accident and emergency (A&E) resuscitation room were videotaped. The following variables were monitored: (1) time to perform three defibrillatory shocks; (2) time to give intravenous adrenaline (centrally or peripherally); (3) the numbers and grade of medical an...

  17. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    Science.gov (United States)

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  18. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
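
    A short worked sketch of the spinning-wheel illusion: for a wheel turning at a true frequency f_true and filmed at f_s frames per second, the apparent rotation is the true frequency folded into the band ±f_s/2. The frame rate and wheel speeds below are illustrative assumptions.

```python
def apparent_rotation(f_true_hz, frame_rate_hz):
    """Apparent rotation rate (Hz, signed) of a wheel sampled by a video camera."""
    # Fold the true frequency into the band (-frame_rate/2, +frame_rate/2].
    return (f_true_hz + frame_rate_hz / 2) % frame_rate_hz - frame_rate_hz / 2

for rpm in (150, 1740, 1800):                  # wheel speeds in rev/min
    f_true = rpm / 60.0
    f_app = apparent_rotation(f_true, 30.0)    # filmed at 30 frames per second
    print(f"{rpm:5d} rpm -> appears to turn at {f_app:+.2f} rev/s")
# 150 rpm appears correct, 1740 rpm appears to spin backwards, 1800 rpm appears stationary.
```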

  19. Optimization of radiation sensors for a passive terahertz video camera for security applications

    NARCIS (Netherlands)

    Zieger, G.J.M.

    2014-01-01

    A passive terahertz video camera allows for fast security screenings from distances of several meters. It avoids irradiation or the impressions of nakedness, which oftentimes cause embarrassment and trepidation of the concerned persons. This work describes the optimization of highly sensitive

  20. In-camera video-stream processing for bandwidth reduction in web inspection

    Science.gov (United States)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data are taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data-stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  1. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  2. Toward standardising gamma camera quality control procedures

    International Nuclear Information System (INIS)

    Alkhorayef, M.A.; Alnaaimi, M.A.; Alduaij, M.A.; Mohamed, M.O.; Ibahim, S.Y.; Alkandari, F.A.; Bradley, D.A.

    2015-01-01

    Attaining high standards of efficiency and reliability in the practice of nuclear medicine requires appropriate quality control (QC) programs. For instance, the regular evaluation and comparison of extrinsic and intrinsic flood-field uniformity enables the quick correction of many gamma camera problems. Whereas QC tests for uniformity are usually performed by exposing the gamma camera crystal to a uniform flux of gamma radiation from a source of known activity, such protocols can vary significantly. Thus, there is a need for optimization and standardization, in part to allow direct comparison between gamma cameras from different vendors. In the present study, intrinsic uniformity was examined as a function of source distance, source activity, source volume and number of counts. The extrinsic uniformity and spatial resolution were also examined. Proper standard QC procedures need to be implemented because of the continual development of nuclear medicine imaging technology and the rapid expansion and increasing complexity of hybrid imaging system data. The present work seeks to promote a set of standard testing procedures to contribute to the delivery of safe and effective nuclear medicine services. Highlights:
    • Optimal parameters for quality control of the gamma camera are proposed.
    • For extrinsic and intrinsic uniformity a minimum of 15,000 counts is recommended.
    • For intrinsic flood uniformity the activity should not exceed 100 µCi (3.7 MBq).
    • For intrinsic uniformity the source-to-detector distance should be at least 60 cm.
    • The bar phantom measurement must be performed with at least 15 million counts.

  3. Registration of retinal sequences from new video-ophthalmoscopic camera.

    Science.gov (United States)

    Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan; Liberdova, Ivana

    2016-05-20

    Analysis of fast temporal changes on retinas has become an important part of diagnostic video-ophthalmology. It enables investigation of the hemodynamic processes in retinal tissue, e.g. blood-vessel diameter changes as a result of blood-pressure variation, spontaneous venous pulsation influenced by intracranial-intraocular pressure difference, blood-volume changes as a result of changes in light reflection from retinal tissue, and blood flow using laser speckle contrast imaging. For such applications, image registration of the recorded sequence must be performed. Here we use a new non-mydriatic video-ophthalmoscope for simple and fast acquisition of low SNR retinal sequences. We introduce a novel, two-step approach for fast image registration. The phase correlation in the first stage removes large eye movements. Lucas-Kanade tracking in the second stage removes small eye movements. We propose robust adaptive selection of the tracking points, which is the most important part of tracking-based approaches. We also describe a method for quantitative evaluation of the registration results, based on vascular tree intensity profiles. The achieved registration error evaluated on 23 sequences (5840 frames) is 0.78 ± 0.67 pixels inside the optic disc and 1.39 ± 0.63 pixels outside the optic disc. We compared the results with the commonly used approaches based on Lucas-Kanade tracking and scale-invariant feature transform, which achieved worse results. The proposed method can efficiently correct particular frames of retinal sequences for shift and rotation. The registration results for each frame (shift in X and Y direction and eye rotation) can also be used for eye-movement evaluation during single-spot fixation tasks.
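
    A hedged OpenCV sketch of the two-step idea, a coarse shift from phase correlation followed by Lucas-Kanade point tracking for the residual motion, is shown below. The adaptive point selection and the quantitative evaluation from the paper are not reproduced, and the inputs are assumed to be single-channel 8-bit frames.

```python
import cv2
import numpy as np

def register_frame(reference, frame):
    """Two-step registration: phase correlation for the coarse shift, then LK refinement."""
    # Step 1: coarse translation between whole frames via phase correlation.
    # Assumed sign convention: phaseCorrelate returns the shift of `frame` relative to `reference`.
    (dx, dy), _response = cv2.phaseCorrelate(np.float32(reference), np.float32(frame))
    m_coarse = np.float32([[1, 0, -dx], [0, 1, -dy]])
    coarse = cv2.warpAffine(frame, m_coarse, frame.shape[::-1])

    # Step 2: refine with sparse Lucas-Kanade tracking of corner points.
    pts = cv2.goodFeaturesToTrack(reference, maxCorners=100, qualityLevel=0.01, minDistance=10)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(reference, coarse, pts, None)
    good = status.flatten() == 1
    m_fine, _ = cv2.estimateAffinePartial2D(nxt[good], pts[good])
    return cv2.warpAffine(coarse, m_fine, frame.shape[::-1])
```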

  4. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  5. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  6. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking that facilitates drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, and that is shown to be flexible across a range of mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended-duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives, in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness of video mosaic construction under a diverse range of conditions, including indoor and outdoor environments, varying illumination and the presence of in-scene motion, on varying computational platforms.

  7. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.

    2010-01-01

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potentially harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to automatically identify MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos that have captured these events by exploring the whole JET database of images, as a preliminary step towards the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of a complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
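
    The MARFE identifier is described as using morphological operators and Hu moments; a hedged OpenCV sketch of that kind of feature-extraction step is shown below. The thresholds, blob-size cut and any classification rule applied at JET are not given in the abstract and are only placeholders here (OpenCV 4 is assumed, with 8-bit grayscale frames).

```python
import cv2
import numpy as np

def bright_blob_hu_features(frame_gray, threshold=200):
    """Morphological clean-up plus Hu-moment descriptors of bright regions in a frame."""
    _, mask = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for contour in contours:
        if cv2.contourArea(contour) < 50:                    # ignore tiny blobs
            continue
        hu = cv2.HuMoments(cv2.moments(contour)).flatten()
        # Log-scale the Hu moments, which span many orders of magnitude.
        features.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))
    return features
```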

  8. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used for motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
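
    As a rough illustration of the block-match step named in the title, the sketch below estimates the motion vector of a single block between two frames by exhaustive search within a window. The block size, search range and synthetic frames are assumptions, and the compressive-sensing recovery itself is not shown.

```python
import numpy as np

def block_match(prev_frame, curr_frame, top, left, block=16, search=8):
    """Find the displacement of one block by exhaustive SAD (sum of absolute differences) search."""
    ref = curr_frame[top:top + block, left:left + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    h, w = prev_frame.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                                     # skip out-of-frame candidates
            cand = prev_frame[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cand - ref).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best, best_sad

# Synthetic test: the current frame is the previous frame shifted by (3, -2).
rng = np.random.default_rng(5)
prev = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
curr = np.roll(prev, shift=(3, -2), axis=(0, 1))
print(block_match(prev, curr, top=24, left=24))      # expected displacement (-3, 2)
```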

  9. Video control system for a drilling in furniture workpiece

    Science.gov (United States)

    Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.

    2018-05-01

    During the last 5 years, Russian industry has been undergoing robotization, presenting scientific groups with new tasks. One of these new tasks is machine vision systems for automatic quality control. Commercial systems of this type cost several thousand dollars each, a price out of reach for small regional businesses. In this article, we describe the principle and algorithm of a low-cost video control system that uses web cameras with a notebook or desktop computer as the computing unit.
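
    The article does not spell out its detection algorithm, so the sketch below shows one plausible low-cost approach on a webcam frame: detecting the drilled hole as a circle with OpenCV's Hough transform and comparing its radius to the nominal one. All parameter values, the tolerance and the file name are assumptions.

```python
import cv2
import numpy as np

def check_drilled_hole(frame_bgr, expected_radius_px, tolerance_px=3):
    """Detect the most prominent circle and compare its radius to the nominal hole radius."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                 # suppress wood-grain texture
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
        param1=100, param2=30,
        minRadius=max(expected_radius_px - 10, 1),
        maxRadius=expected_radius_px + 10)
    if circles is None:
        return False, None                         # no hole found in this frame
    x, y, r = circles[0][0]                        # strongest detection
    return abs(r - expected_radius_px) <= tolerance_px, (x, y, r)

# Hypothetical usage with a single frame grabbed from a webcam:
# ok, detection = check_drilled_hole(cv2.imread("workpiece.png"), expected_radius_px=40)
```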

  10. A passive terahertz video camera based on lumped element kinetic inductance detectors

    International Nuclear Information System (INIS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-01-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  11. A passive terahertz video camera based on lumped element kinetic inductance detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Wood, Ken [QMC Instruments Ltd., School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Grainger, William [Rutherford Appleton Laboratory, STFC, Swindon SN2 1SZ (United Kingdom); Mauskopf, Philip [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); School of Earth Science and Space Exploration, Arizona State University, Tempe, Arizona 85281 (United States); Spencer, Locke [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada)

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  12. Design of IP Camera Access Control Protocol by Utilizing Hierarchical Group Key

    Directory of Open Access Journals (Sweden)

    Jungho Kang

    2015-08-01

    Full Text Available Unlike the CCTV-based security video surveillance devices we have generally known, IP cameras, which are connected to a network either with or without wires, provide monitoring services through a built-in web server. Because IP cameras can use a network such as the Internet, multiple IP cameras can be installed over long distances, and each IP camera can use its web-server functions individually. Despite these advantages, IP cameras present difficulties in access control management and weaknesses in user certification. In particular, because the IP camera market is still relatively young, systems designed from a security perspective have not yet been established, and severe weaknesses remain in access authority to the IP camera web server, certification of users, and certification of IP cameras newly installed within a network. This research groups IP cameras hierarchically to manage them systematically, and provides access control and data confidentiality between groups by utilizing group keys. In addition, IP cameras and users are certified using PKI-based certification, and security weaknesses such as confidentiality and integrity are improved by encrypting passwords. The paper presents the specific protocols of the entire process and proves through experiments that this method can be applied in practice.

  13. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  14. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  15. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    International Nuclear Information System (INIS)

    Strehlow, J.P.

    1994-01-01

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1)

  16. Combining local and global optimisation for virtual camera control

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.; 2010 IEEE Symposium on Computational Intelligence and Games

    2010-01-01

    Controlling a virtual camera in 3D computer games is a complex task. The camera is required to react to dynamically changing environments and produce high quality visual results and smooth animations. This paper proposes an approach that combines local and global search to solve the virtual camera control problem. The automatic camera control problem is described and it is decomposed into sub-problems; then a hierarchical architecture that solves each sub-problem using the most appropriate op...

  17. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables to store the correction data at eight focal-length points for the blue and red channels. When the focal-length data are input from the lens control units, the relevant correction data are interpolated from two of the eight correction data tables. The system then performs geometric conversion on both channels using this correction data. This paper describes how the correction function successfully reduces the lateral chromatic aberration in real time, to an amount small enough to ensure the desired image resolution is achieved over the entire zoom range of the lens.

  18. CRED Fish Observations from Stereo Video Cameras on a SeaBED AUV collected around Tutuila, American Samoa in 2012

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Black and white imagery were collected using a stereo pair of underwater video cameras mounted on a SeaBED autonomous underwater vehicle (AUV) and deployed around...

  19. Performance and quality control of scintillation cameras

    International Nuclear Information System (INIS)

    Moretti, J.L.; Iachetti, D.

    1983-01-01

    Acceptance testing and quality control of gamma cameras are part of diagnostic quality assurance in clinical practice. Several parameters are required to achieve good diagnostic reliability: intrinsic spatial resolution, spatial linearity, uniformity, energy resolution, count-rate characteristics, and multiple-window spatial analysis. Each parameter was measured and also estimated by a test that is easy to implement in routine practice. The material required was a 4028 multichannel analyzer linked to a microcomputer, mini-computers and a set of phantoms (parallel slits, a diffusing phantom, and an orthogonal-hole transmission pattern). The gamma cameras under study were: CGR 3400, CGR 3420, G.E. 4000, Siemens ZLC 75 and a large-field Philips camera. Several tests proposed by N.E.M.A. and W.H.O. need to be improved with regard to overly localized spatial determinations during distortion measurements with multiple windows. Image contrast needs to be monitored at high counting rates. This study shows the need to avoid point-by-point determinations and the value of reporting sets of values of the same parameter over the whole field, giving mean values with their standard deviation [fr

  20. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For most elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a fallen elder who may be fainting is delayed, more serious injury may follow. Traditional security or video surveillance systems require caregivers to monitor a centralized screen continuously, or require the elder to wear sensors to detect falls, which wastes human effort or inconveniences the elder. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored, and then applies a falling-pattern recognition algorithm to determine whether a falling incident has occurred. If so, the system sends short messages to the people who need to be notified. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
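
    The overall control flow of such a system can be sketched as a per-camera capture loop feeding a falling-pattern detector and a notification step. The sketch below is a simplified software illustration of that flow only; the recognition step is a stub, and the paper's DSP-based acceleration is not reproduced:

        import cv2

        CAMERA_IDS = [0, 1, 2, 3]              # one capture per monitored region (illustrative)

        def looks_like_fall(prev_frame, frame) -> bool:
            """Placeholder for the falling-pattern recognition step: a real detector would
            track the person's silhouette and flag a rapid drop in height followed by a
            prolonged horizontal posture."""
            return False

        def notify(camera_id: int) -> None:
            # Stand-in for sending a short message to the designated caregiver.
            print(f"ALERT: possible fall on camera {camera_id}")

        captures = {cid: cv2.VideoCapture(cid) for cid in CAMERA_IDS}
        last = {cid: None for cid in CAMERA_IDS}

        for _ in range(1000):                  # bounded monitoring loop for the sketch
            for cid, cap in captures.items():
                ok, frame = cap.read()
                if not ok:
                    continue                   # camera not available in this environment
                if last[cid] is not None and looks_like_fall(last[cid], frame):
                    notify(cid)
                last[cid] = frame

        for cap in captures.values():
            cap.release()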

  1. Utilization of a video camera in the study of the goshawk (Accipiter gentilis) diet

    Directory of Open Access Journals (Sweden)

    Martin Tomešek

    2011-01-01

    Full Text Available In 2009, research was carried out on the food spectrum of the goshawk (Accipiter gentilis) by means of automatic digital video cameras with a recording device in the Chřiby Upland. The monitoring took place at two localities in the vicinity of the village of Buchlovice, at the southeastern edge of the Chřiby Upland, over the period from the hatching of the chicks to their fledging from the nest. The unambiguous advantage of using camera systems to study the food spectrum is the possibility of exactly identifying the delivered prey in the majority of cases. The technology used was as economical and effective as possible and was adapted to the given conditions. The use of automatic digital video cameras with a recording device yielded a number of valuable data that clarify the food spectrum of the species. The main output of the whole project is the determination of the food spectrum of the goshawk (Accipiter gentilis) at the two localities, which showed the following composition: 89% birds, 9.5% mammals and 1.5% other animals or unidentifiable food components. Birds of the genus Turdus were the most frequent prey in both monitored nests. Among mammals, Sciurus vulgaris was the most frequent.

  2. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested for the safety interlock, which shuts down the camera and pan-and-tilt unit inside the tank vapor space on loss of purge pressure, and to verify that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system

  3. A generic model for camera based intelligent road crowd control ...

    African Journals Online (AJOL)

    This research proposes a model for intelligent traffic flow control by implementing camera-based surveillance and a feedback system. A series of cameras is set at least three signals ahead of the target junction. A complete software system is developed to help integrate the multiple roadside cameras as feedback to ...

  4. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they constitute background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
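
    The two core computations described above are stereo triangulation of the shuttlecock position and prediction of its landing point. A minimal sketch, assuming calibrated projection matrices and ignoring aerodynamic drag (which a real shuttlecock predictor would have to model); all numbers are illustrative:

        import numpy as np
        import cv2

        # Hypothetical projection matrices for the two calibrated high-speed cameras.
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at origin
        P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # 0.5 m baseline

        def triangulate(pt1, pt2):
            """Recover one 3D point from a matched pixel pair in the two views."""
            X = cv2.triangulatePoints(P1, P2,
                                      np.float64(pt1).reshape(2, 1),
                                      np.float64(pt2).reshape(2, 1))
            return (X[:3] / X[3]).ravel()

        def predict_landing(p0, p1, dt, g=9.81):
            """Landing point where the third coordinate reaches zero, ignoring drag.
            Treating that coordinate as height assumes the cameras are calibrated to a
            world frame whose z axis points up."""
            v = (p1 - p0) / dt
            a, b, c = -0.5 * g, v[2], p1[2]                       # z(t) = c + b*t + a*t^2
            t = (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # positive root
            return p1[:2] + v[:2] * t

        p0 = triangulate((412.0, 300.5), (398.2, 300.1))
        p1 = triangulate((415.3, 296.8), (401.6, 296.3))
        print(predict_landing(p0, p1, dt=1.0 / 500.0))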

  5. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network.
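
    The trajectory-based alarm logic can be illustrated, in a much-simplified form, as rules evaluated over per-object parameters such as speed and the semantic zone the object occupies. The sketch below uses plain Python predicates in place of the paper's ontology and semantic reasoner; all rule names and thresholds are invented for illustration:

        from dataclasses import dataclass

        @dataclass
        class TrackedObject:
            object_id: int
            speed_mps: float          # average speed over the last few seconds
            zone: str                 # semantic label of the region the object is in
            seconds_in_zone: float

        # Much-simplified stand-ins for rules the paper encodes in an ontology.
        RULES = [
            ("running_in_restricted_area",
             lambda o: o.zone == "restricted" and o.speed_mps > 3.0),
            ("loitering",
             lambda o: o.zone == "entrance" and o.speed_mps < 0.2 and o.seconds_in_zone > 120),
        ]

        def raise_alarms(tracked_objects):
            for obj in tracked_objects:
                for name, predicate in RULES:
                    if predicate(obj):
                        yield {"alarm": name, "object": obj.object_id, "zone": obj.zone}

        for alarm in raise_alarms([TrackedObject(7, 4.2, "restricted", 5.0)]):
            print(alarm)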

  6. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  7. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services through the Smart City safety network.

  8. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform close-up weld and corrosion inspection in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition

  9. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    Full Text Available A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily flooded by the considerable volume of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
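
    The idea can be illustrated with a toy reaction-diffusion update on a grid of camera nodes, where an activator variable diffuses outward from nodes that observe a target and is mapped to a per-node coding rate. This is a generic FitzHugh-Nagumo-style discretization for illustration, not the authors' equations or parameters:

        import numpy as np

        GRID = (10, 10)                 # camera sensor nodes on a grid (illustrative)
        u = np.zeros(GRID)              # activator: high value -> high coding rate
        v = np.zeros(GRID)              # inhibitor

        def laplacian(field):
            return (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                    np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)

        def step(u, v, stimulus, du=0.10, dv=0.05, dt=0.1):
            """One toy reaction-diffusion update; 'stimulus' is 1 at nodes that
            currently observe a target and 0 elsewhere."""
            reaction_u = u - u**3 - v + stimulus
            reaction_v = 0.5 * (u - v)
            u = u + dt * (du * laplacian(u) + reaction_u)
            v = v + dt * (dv * laplacian(v) + reaction_v)
            return np.clip(u, 0.0, 1.0), v

        stimulus = np.zeros(GRID)
        stimulus[4, 5] = 1.0            # a target seen by the node at (4, 5)
        for _ in range(200):
            u, v = step(u, v, stimulus)

        coding_rate_kbps = 64 + 960 * u  # map activator level to a per-node coding rate
        print(coding_rate_kbps[4, 5], coding_rate_kbps[0, 0])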

  10. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily flooded by the considerable volume of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  11. LAMOST CCD camera-control system based on RTS2

    Science.gov (United States)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
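
    The master-slave layer described above can be pictured as a single virtual camera object that fans a command out to many slave controllers and aggregates their status. The sketch below is a generic Python illustration of that pattern, not the RTS2 API:

        from concurrent.futures import ThreadPoolExecutor

        class SlaveCamera:
            """Stand-in for one of the 32 real CCD camera controllers (not the RTS2 API)."""
            def __init__(self, camera_id: int):
                self.camera_id = camera_id

            def expose(self, exptime_s: float) -> str:
                # A real slave would talk to the detector electronics here.
                return f"camera {self.camera_id:02d}: exposed {exptime_s:.1f} s"

        class VirtualCamera:
            """Master device that presents the 32 slaves as a single camera to the
            observatory control framework, in the spirit of the virtualized layer
            described above."""
            def __init__(self, slaves):
                self.slaves = slaves

            def expose(self, exptime_s: float):
                with ThreadPoolExecutor(max_workers=len(self.slaves)) as pool:
                    reports = list(pool.map(lambda c: c.expose(exptime_s), self.slaves))
                failed = [r for r in reports if "exposed" not in r]
                return {"ok": not failed, "reports": reports}

        master = VirtualCamera([SlaveCamera(i) for i in range(32)])
        print(master.expose(20.0)["reports"][:2])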

  12. Interaction Control Protocols for Distributed Multi-user Multi-camera Environments

    Directory of Open Access Journals (Sweden)

    Gareth W Daniel

    2003-10-01

    Full Text Available Video-centred communication (e.g., video conferencing, multimedia online learning, traffic monitoring, and surveillance is becoming a customary activity in our lives. The management of interactions in such an environment is a complicated HCI issue. In this paper, we present our study on a collection of interaction control protocols for distributed multiuser multi-camera environments. These protocols facilitate different approaches to managing a user's entitlement for controlling a particular camera. We describe a web-based system that allows multiple users to manipulate multiple cameras in varying remote locations. The system was developed using the Java framework, and all protocols discussed have been incorporated into the system. Experiments were designed and conducted to evaluate the effectiveness of these protocols, and to enable the identification of various human factors in a distributed multi-user and multi-camera environment. This work provides an insight into the complexity associated with the interaction management in video-centred communication. It can also serve as a conceptual and experimental framework for further research in this area.

  13. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort in stereoscopic video include conflict between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfortable viewing zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. Compared with traditional visual fatigue prediction models, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using multiple linear regression combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score is obtained with the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
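
    The prediction step reduces to a multiple linear regression of subjective fatigue scores onto the per-shot factors. A minimal sketch with invented factor values and scores (the paper's actual factors, coefficients and data are not reproduced here):

        import numpy as np

        # Hypothetical per-shot factors (illustrative):
        # columns: spatial-structure score, motion-scale score, comfort-zone score
        X = np.array([[0.2, 0.1, 0.9],
                      [0.5, 0.4, 0.6],
                      [0.7, 0.8, 0.3],
                      [0.9, 0.9, 0.1],
                      [0.4, 0.2, 0.8],
                      [0.6, 0.7, 0.4]])
        subjective_fatigue = np.array([1.2, 2.1, 3.6, 4.5, 1.8, 3.1])   # e.g. a 1-5 scale

        # Multiple linear regression with an intercept, fitted by least squares.
        A = np.hstack([X, np.ones((X.shape[0], 1))])
        coeffs, *_ = np.linalg.lstsq(A, subjective_fatigue, rcond=None)

        def predict_fatigue(spatial, motion, comfort):
            return float(np.dot([spatial, motion, comfort, 1.0], coeffs))

        print(coeffs)                      # factor weights and intercept
        print(predict_fatigue(0.8, 0.6, 0.2))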

  14. vid116_0501n -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  15. vid116_0501s -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. vid116_0501c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  17. vid116_0501d -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  18. Control system for several rotating mirror camera synchronization operation

    Science.gov (United States)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part and the time delay part), the shutter control unit, the motor driving unit and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating-mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different speeds.

  19. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera:

    International Nuclear Information System (INIS)

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and for estimating soil moisture by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations, and less when it was applied at soil depths with lower root concentrations. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs

  20. Dynamic Artificial Potential Fields for Autonomous Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This paper presents the implementation and evaluation of Artificial Potential Fields for automatic camera placement. We first describe the recasting of the frame-composition problem as the solution for two particles suspended in an Artificial Potential Field. We demonstrate the application of this technique to control both camera...
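
    The core idea can be sketched as gradient descent of a camera position in a potential that is attractive towards a desired framing position and repulsive around occluding geometry. The following is a generic 2D illustration with invented gains and positions, not the authors' formulation:

        import numpy as np

        # Hypothetical 2D scene: the camera is attracted towards a viewpoint that frames
        # the subject well and repelled from an occluding obstacle (all values invented).
        desired_view = np.array([4.0, 2.0])      # attractor: target camera position
        obstacle = np.array([2.5, 1.5])          # repulsor: geometry to stay clear of

        def potential_gradient(p, k_att=1.0, k_rep=0.8, rep_radius=1.5):
            grad = k_att * (p - desired_view)                    # quadratic attractive well
            d = np.linalg.norm(p - obstacle)
            if 1e-6 < d < rep_radius:                            # repulsion only when close
                grad += k_rep * (1.0 / rep_radius - 1.0 / d) / d**3 * (p - obstacle)
            return grad

        camera = np.array([0.0, 0.0])
        for _ in range(1000):                                    # capped-step gradient descent
            g = potential_gradient(camera)
            camera = camera - 0.05 * g / max(1.0, np.linalg.norm(g))
        print(camera)    # ends near the attractor after skirting the obstacle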

  1. Playing Action Video Games Improves Visuomotor Control.

    Science.gov (United States)

    Li, Li; Chen, Rongrong; Chen, Jing

    2016-08-01

    Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving. © The Author(s) 2016.

  2. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional camera-based human detection is not high enough to meet the requirements of operation surveillance. One key reason is that the Histograms of Oriented Gradients (HOG) features of the human body differ greatly between front-and-back standing (F&B) and side standing (Side) postures. Therefore, when HOG features extracted from samples of different postures are used directly, the trained classifier gains only a few specific features that contribute to classification, which is insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first, preprocessing stage, image windows are divided into possible F&B human bodies and others; the latter are then passed to a second-stage classification that distinguishes side-standing humans from non-humans. Experimental results in Tianjin port show that the two-stage classifier clearly improves the classification accuracy of human detection.
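
    The decision cascade itself is simple: a first classifier screens for front-and-back standing persons, and only its rejects are passed to a second classifier that separates side-standing persons from non-humans. A minimal sketch using linear SVMs on placeholder feature vectors (a real system would extract HOG descriptors from image windows and train on labeled port imagery):

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        # Placeholder HOG feature vectors (the real system extracts HOG from image windows).
        def fake_hog(n, dim=3780):
            return rng.normal(size=(n, dim))

        # Stage 1: front-and-back (F&B) standing person vs. everything else.
        stage1 = LinearSVC(dual=False).fit(
            np.vstack([fake_hog(50), fake_hog(50)]),
            np.array([1] * 50 + [0] * 50))          # 1 = possible F&B person

        # Stage 2: side-standing person vs. non-human, applied only to stage-1 rejects.
        stage2 = LinearSVC(dual=False).fit(
            np.vstack([fake_hog(50), fake_hog(50)]),
            np.array([1] * 50 + [0] * 50))          # 1 = side-standing person

        def classify_window(hog_vector):
            if stage1.predict([hog_vector])[0] == 1:
                return "person (front/back)"
            if stage2.predict([hog_vector])[0] == 1:
                return "person (side)"
            return "non-human"

        print(classify_window(fake_hog(1)[0]))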

  3. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Czech Academy of Sciences Publication Activity Database

    Pospíšil, Jaroslav; Jakubík, P.; Machala, L.

    2005-01-01

    Roč. 116, - (2005), s. 573-585 ISSN 0030-4026 Institutional research plan: CEZ:AV0Z10100522 Keywords: random-target measuring method * light-reflection white-noise target * digital video camera * modulation transfer function * power spectral density Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.395, year: 2005

  4. Quality control of plane and tomographic gamma cameras

    International Nuclear Information System (INIS)

    Moretti, J.L.; Roussi, A.

    1993-01-01

    In this article, the authors present different methods of gamma camera quality control in matters of uniformity, spatial resolution, spatial linearity, sensitivity, energy resolution, counting rate performance, SPECT parameters. The authors refer mainly to NEMA standards. 14 figs., 8 tabs

  5. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' evolving needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures for scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large scale ad-hoc networks.

  6. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  7. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  8. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
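
    The comparison scheme can be sketched as both ends sampling the same pseudo-randomly chosen pixel locations and checking that the gray values agree within a tolerance. The seed sharing, tolerances and counts below are illustrative assumptions, not the fielded design:

        import numpy as np

        def sample_points(frame_shape, seed, n_points=64):
            """Both the camera-side and recorder-side units derive the same pseudo-random
            sample locations from a shared seed (the seed exchange here is illustrative)."""
            rng = np.random.default_rng(seed)
            rows = rng.integers(0, frame_shape[0], n_points)
            cols = rng.integers(0, frame_shape[1], n_points)
            return rows, cols

        def authenticate(camera_frame, recorder_frame, seed, tolerance=8, max_mismatches=4):
            rows, cols = sample_points(camera_frame.shape, seed)
            diffs = np.abs(camera_frame[rows, cols].astype(int) -
                           recorder_frame[rows, cols].astype(int))
            return int(np.sum(diffs > tolerance)) <= max_mismatches

        frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        tampered = frame.copy()
        tampered[100:300, 200:500] = 0                      # substituted image region
        print(authenticate(frame, frame.copy(), seed=1234))   # True: images agree
        print(authenticate(frame, tampered, seed=1234))       # likely False: substitution detected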

  9. Live video monitoring robot controlled by web over internet

    Science.gov (United States)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots: robots can perform tasks where humans cannot. Robots have huge applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical and mechanical engineering and can perform tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot; this "robovision" helps in monitoring security systems and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web control for the robot to move left, right, forward and backward while streaming video. As we move to the smart environment, or Internet of Things (IoT), of smart devices, the system developed here connects over the Internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B acts as the heart of the robot; the required motors and the Raspberry Pi surveillance camera are connected to the Raspberry Pi.
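
    A minimal sketch of the web-based drive control is shown below, using Flask and the RPi.GPIO library on a Raspberry Pi; the GPIO pin numbers, motor-driver wiring and URL scheme are assumptions for illustration, and video streaming would be served by a separate process:

        from flask import Flask
        import RPi.GPIO as GPIO      # available only on a Raspberry Pi

        MOTOR_PINS = {"left_fwd": 5, "left_rev": 6, "right_fwd": 13, "right_rev": 19}

        GPIO.setmode(GPIO.BCM)
        for pin in MOTOR_PINS.values():
            GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

        app = Flask(__name__)

        def drive(left_fwd, left_rev, right_fwd, right_rev):
            GPIO.output(MOTOR_PINS["left_fwd"], left_fwd)
            GPIO.output(MOTOR_PINS["left_rev"], left_rev)
            GPIO.output(MOTOR_PINS["right_fwd"], right_fwd)
            GPIO.output(MOTOR_PINS["right_rev"], right_rev)

        @app.route("/move/<direction>")
        def move(direction):
            commands = {
                "front": (1, 0, 1, 0),
                "back":  (0, 1, 0, 1),
                "left":  (0, 1, 1, 0),
                "right": (1, 0, 0, 1),
                "stop":  (0, 0, 0, 0),
            }
            drive(*commands.get(direction, (0, 0, 0, 0)))
            return f"direction: {direction}"

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8000)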

  10. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.

  11. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  12. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  13. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  14. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences from the trait changes in space-time under complete expressional production are then arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a method of manifold learning of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.

  15. New system for linear accelerator radiosurgery with a gantry-mounted video camera

    International Nuclear Information System (INIS)

    Kunieda, Etsuo; Kitamura, Masayuki; Kawaguchi, Osamu; Ohira, Takayuki; Ogawa, Kouichi; Ando, Yutaka; Nakamura, Kayoko; Kubo, Atsushi

    1998-01-01

    Purpose: We developed a positioning method that does not depend on the positioning mechanism originally annexed to the linac and investigated the positioning errors of the system. Methods and Materials: A small video camera was placed at a location optically identical to the linac x-ray source. A target pointer comprising a convex lens and bull's eye was attached to the arc of the Leksell stereotactic system so that the lens would form a virtual image of the bull's eye (virtual target) at the position of the center of the arc. The linac gantry and target pointer were placed at the side and top to adjust the arc center to the isocenter by referring to the virtual target. Coincidence of the target and the isocenter could be confirmed in any combination of couch and gantry rotation. In order to evaluate the accuracy of the positioning, a tungsten ball was attached to the stereotactic frame as a simulated target, which was repeatedly localized and repositioned to estimate the magnitude of the error. The center of the circular field defined by the collimator was marked on the film. Results: The differences between the marked centers of the circular field and the centers of the shadow of the simulated target were less than 0.3 mm

  16. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high resolution (512-by-512) pixels and high precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition is fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
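
    The range computation behind such cameras can be illustrated per pixel: the phase of the beat signal is estimated from N evenly spaced samples and converted to distance. A minimal single-pixel sketch with an illustrative modulation frequency (not the parameters of the system described above):

        import numpy as np

        C = 299_792_458.0            # speed of light (m/s)
        F_MOD = 40e6                 # illumination modulation frequency (illustrative)
        N_SAMPLES = 9                # more than four samples per beat cycle

        def estimate_phase(samples):
            """Phase of the fundamental from N evenly spaced samples of the beat signal
            (a single-bin discrete Fourier transform)."""
            n = np.arange(len(samples))
            return np.angle(np.sum(samples * np.exp(-2j * np.pi * n / len(samples))))

        def phase_to_distance(phase):
            """One full cycle of phase corresponds to c / (2 * F_MOD) of range."""
            return (phase % (2.0 * np.pi)) * C / (4.0 * np.pi * F_MOD)

        # Synthetic pixel: a beat signal whose phase corresponds to a 2.5 m target.
        true_phase = 4.0 * np.pi * F_MOD * 2.5 / C
        n = np.arange(N_SAMPLES)
        samples = 1.0 + 0.8 * np.cos(2.0 * np.pi * n / N_SAMPLES + true_phase)
        print(phase_to_distance(estimate_phase(samples)))     # ~2.5 m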

  17. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

    Full Text Available This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertainty in relation to camera orientation. The key feature of the analysis is a strict Lyapunov function that allows the conclusion of asymptotic stability without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic Proportional Derivative control algorithm. Experiments in a laboratory prototype show that uncertainty in camera orientation does not significantly degrade closed-loop performance.

  18. Observation of the dynamic movement of fragmentations by high-speed camera and high-speed video

    Science.gov (United States)

    Suk, Chul-Gi; Ogata, Yuji; Wada, Yuji; Katsuyama, Kunihisa

    1995-05-01

    Experiments on blasting mortar concrete blocks and model concrete columns were carried out in order to obtain technical information on the fragmentation caused by blasting demolition. The dimensions of the mortar concrete blocks were 1,000 x 1,000 x 1,000 mm. Six kinds of experimental blasts were carried out using the mortar concrete blocks. In these experiments, precision detonators and No. 6 electric detonators with 10 cm of detonating fuse were used, and the control of fragmentation was discussed. The results made clear that the flying distance of fragments can be controlled using a precise blasting system. Reinforced concrete model columns typical of apartment houses in Japan were used in the experiments. The dimensions of each concrete test column were 800 x 800 x 2400 mm, buried 400 mm in the ground. The specified design strength of the concrete was 210 kgf/cm2. The columns were demolished by blasting with an internal charge of dynamite. The fragments were observed by two high-speed cameras at 500 and 2,000 frames per second and a high-speed video camera at 400 frames per second. As one result of the experiments, the velocity of fragments from a blast of 330 g of explosive with a minimum resisting length of 0.32 m was measured at about 40 m/s.

  19. Video digitizer (real-time frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage

  20. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications

  1. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four-frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)] as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera is an improvement over earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system so that an instant visual record of the shot was available to the machine operator as well as facilitating quantitative interpretation of intensity information recorded in the images. The camera was installed in the end region of the LSX on axis approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with "particle dumps" at the axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed

  2. HDR 192Ir source speed measurements using a high speed video camera

    International Nuclear Information System (INIS)

    Fonseca, Gabriel P.; Viana, Rodrigo S. S.; Yoriyaz, Hélio; Podesta, Mark; Rubo, Rodrigo A.; Sales, Camila P. de; Reniers, Brigitte; Verhaegen, Frank

    2015-01-01

    Purpose: The dose delivered with an HDR 192Ir afterloader can be separated into a dwell component and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile, and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a 192Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration of up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases and assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which are within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.

  3. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreating of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  4. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
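
    The blending idea can be sketched per missing line as a per-pixel weighted combination of a spatial estimate and a temporal estimate, with the weight driven by a motion measure. The sketch below substitutes a simple frame-difference measure for the spectral-residue weighting and plain line averaging for control grid interpolation, so it illustrates the blending only:

        import numpy as np

        def deinterlace_field(field, previous_frame, parity):
            """Fill the missing lines of one field by blending a spatial estimate
            (average of the lines above and below) with a temporal estimate (the same
            line from the previous frame), weighted per pixel by a crude motion measure."""
            h, w = previous_frame.shape
            out = previous_frame.astype(float).copy()
            out[parity::2, :] = field                      # lines present in this field
            for row in range(1 - parity, h, 2):            # missing lines
                above = out[max(row - 1, 0), :]
                below = out[min(row + 1, h - 1), :]
                spatial = 0.5 * (above + below)
                temporal = previous_frame[row, :].astype(float)
                motion = np.abs(spatial - temporal) / 255.0
                weight = np.clip(motion * 4.0, 0.0, 1.0)   # more motion: trust spatial
                out[row, :] = weight * spatial + (1.0 - weight) * temporal
            return out

        prev = np.random.randint(0, 256, (480, 640)).astype(float)
        field = prev[0::2, :] + 5.0                        # the newly captured even field
        print(deinterlace_field(field, prev, parity=0).shape)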

  5. Direct measurement of erythrocyte deformability in diabetes mellitus with a transparent microchannel capillary model and high-speed video camera system.

    Science.gov (United States)

    Tsukada, K; Sekizuka, E; Oshio, C; Minamitani, H

    2001-05-01

    To measure erythrocyte deformability in vitro, we made transparent microchannels on a crystal substrate as a capillary model. We observed axisymmetrically deformed erythrocytes and defined a deformation index directly from individual flowing erythrocytes. By appropriate choice of channel width and erythrocyte velocity, we could observe erythrocytes deforming to a parachute-like shape similar to that occurring in capillaries. The flowing erythrocytes magnified 200-fold through microscopy were recorded with an image-intensified high-speed video camera system. The sensitivity of deformability measurement was confirmed by comparing the deformation index in healthy controls with erythrocytes whose membranes were hardened by glutaraldehyde. We confirmed that the crystal microchannel system is a valuable tool for erythrocyte deformability measurement. Microangiopathy is a characteristic complication of diabetes mellitus. A decrease in erythrocyte deformability may be part of the cause of this complication. In order to identify the difference in erythrocyte deformability between control and diabetic erythrocytes, we measured erythrocyte deformability using transparent crystal microchannels and a high-speed video camera system. The deformability of diabetic erythrocytes was indeed measurably lower than that of erythrocytes in healthy controls. This result suggests that impaired deformability in diabetic erythrocytes can cause altered viscosity and increase the shear stress on the microvessel wall. Copyright 2001 Academic Press.

  6. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
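
    The frame-to-frame step described above can be sketched with standard two-view geometry routines: track surface points between consecutive frames, estimate the essential matrix, and recover the relative rotation and a scale-ambiguous translation. A sketch using OpenCV, with assumed intrinsics and a synthetic image pair (not the authors' algorithm or data):

        import numpy as np
        import cv2

        # Assumed pinhole intrinsics for the endoscope camera (values are illustrative).
        K = np.array([[700.0, 0.0, 320.0],
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])

        def frame_to_frame_pose(gray_prev, gray_curr):
            """Estimate the camera rotation and (scale-ambiguous) translation between two
            consecutive frames from tracked surface points and the epipolar constraint."""
            pts_prev = cv2.goodFeaturesToTrack(gray_prev, maxCorners=500,
                                               qualityLevel=0.01, minDistance=7)
            pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_curr,
                                                           pts_prev, None)
            good = status.ravel() == 1
            p0, p1 = pts_prev[good], pts_curr[good]
            E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
            E = E[:3, :]                     # keep the first candidate if several are returned
            _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
            return R, t                      # t is a unit vector; absolute scale is unobservable

        # Synthetic example: a textured image and a slightly shifted copy of it.
        rng = np.random.default_rng(0)
        img = cv2.GaussianBlur((rng.random((480, 640)) * 255).astype(np.uint8), (5, 5), 0)
        shifted = np.roll(img, 3, axis=1)
        R, t = frame_to_frame_pose(img, shifted)
        print(np.round(R, 3), t.ravel())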

  7. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Science.gov (United States)

    Pospisil, J.; Jakubik, P.; Machala, L.

    2005-11-01

    This article reports the proposal, realization and verification of a newly developed method for measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling and digitizing steps, which are influenced by additive and spatially discrete photodetector, aliasing and quantization noises. The method uses the camera's automatic still-capture working regime and a static, two-dimensional, spatially continuous light-reflection random target with white-noise properties. The theoretical justification of the random-target method is given using a proposed simulation model of the linear optical-intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the measurable output and input power spectral densities. The random-target and resultant image data were acquired and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples and the other results of the performed measurements demonstrate sufficient repeatability and the acceptability of the described method for comparative evaluation of the performance of digital video cameras under various conditions.
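
    The core of the random-target idea, the MTF obtained as the square root of the ratio of output to input power spectral densities, can be sketched as follows; the smoothing and normalization details of the published method are simplified, so this is only an assumed, minimal version.

```python
import numpy as np

def mtf_from_random_target(input_img, output_img):
    """Estimate a 1-D MTF as sqrt(PSD_out / PSD_in), averaged over image rows.
    Assumes the target is white noise and that the input (target) and output
    (camera) images are registered and equally sized; the smoothing and
    normalization steps of the cited method are simplified here."""
    def psd_rows(img):
        img = np.asarray(img, float)
        spec = np.fft.rfft(img - img.mean(), axis=1)   # remove DC, FFT each row
        return (np.abs(spec) ** 2).mean(axis=0)        # average PSD over rows

    psd_in, psd_out = psd_rows(input_img), psd_rows(output_img)
    mtf = np.sqrt(psd_out / np.maximum(psd_in, 1e-12))
    return mtf / max(mtf[1], 1e-12)   # normalize at the lowest nonzero frequency (DC bin removed)
```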

  8. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function of conversion from analogue to digital mode is not within the control of the operator, but the second type of manipulation is in the control of the operator. These types of manipulation should be done carefully without sacrificing the integrity of the incoming information.

  9. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    International Nuclear Information System (INIS)

    WERRY, S.M.

    2000-01-01

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151

  10. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  11. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    Science.gov (United States)

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  12. Intelligent control for scalable video processing

    NARCIS (Netherlands)

    Wüst, C.C.

    2006-01-01

    In this thesis we study a problem related to cost-effective video processing in software by consumer electronics devices, such as digital TVs. Video processing is the task of transforming an input video signal into an output video signal, for example to improve the quality of the signal. This

  13. Measuring the Angular Velocity of a Propeller with Video Camera Using Electronic Rolling Shutter

    Directory of Open Access Journals (Sweden)

    Yipeng Zhao

    2018-01-01

    Full Text Available Noncontact measurement of rotational motion has advantages over the traditional method, which measures rotational motion by installing devices such as a rotary encoder on the object. Cameras can be employed as remote monitoring or inspection sensors to measure the angular velocity of a propeller because of their commonplace availability, simplicity, and potentially low cost. A drawback of measurement with cameras is the need to process the massive data they generate. In order to reduce the data collected from the camera, a camera using an ERS (electronic rolling shutter) is applied to measure angular velocities that are higher than the speed of the camera. The rolling shutter induces geometric distortion in the image when the propeller rotates while an image is being captured. In order to reveal the relationship between the angular velocity and the image distortion, a rotation model has been established. The proposed method was applied to measure the angular velocities of a two-blade propeller and a multiblade propeller. The experimental results showed that this method could detect angular velocities higher than the camera speed, and the accuracy was acceptable.
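
    The underlying geometry can be illustrated with a minimal sketch: with a rolling shutter, each image row is exposed a fixed interval later than the previous one, so the blade angle measured per row, plotted against row exposure time, has a slope equal to the angular velocity. The function below assumes the per-row blade angles have already been extracted and is not the paper's full rotation model.

```python
import numpy as np

def angular_velocity_from_rows(blade_angle_per_row, row_time):
    """Estimate angular velocity (rad/s) from the apparent blade angle measured
    in successive image rows of a rolling-shutter frame.
    blade_angle_per_row: 1-D array of blade angles (rad), one per row;
    row_time: interval between the exposure of consecutive rows (s).
    Simplified model; the cited paper derives a fuller distortion model."""
    angles = np.unwrap(np.asarray(blade_angle_per_row, float))
    rows = np.arange(angles.size)
    # Least-squares slope of angle vs. time gives the mean angular velocity.
    omega = np.polyfit(rows * row_time, angles, 1)[0]
    return omega
```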

  14. Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter

    Science.gov (United States)

    Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun

    2018-04-01

    Cameras offer a unique capability of collecting high density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with a camera is the need to process the massive data it generates. In order to reduce the data collected from the camera, a camera using an electronic rolling shutter (ERS) is applied to measure the frequency of one-dimensional vibration, which can be much higher than the speed of the camera. Every row in the image captured by the ERS camera records the vibration displacement at a different time. The displacements that make up the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam and an air compressor, to verify the validity of the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
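
    The essential idea can be sketched in a few lines: each row samples the vibrating edge's displacement at a slightly later time, so stacking the per-row displacements and taking a spectrum recovers frequencies well above the frame rate. The per-row displacement extraction and the paper's sliding-window local analysis are assumed to happen upstream.

```python
import numpy as np

def vibration_frequency(displacement_per_row, row_time):
    """Estimate the dominant vibration frequency (Hz) from per-row displacements
    extracted from a rolling-shutter frame. displacement_per_row: 1-D array of
    the vibrating edge's position, one sample per row; row_time: row interval (s).
    Simplified; the cited method uses sliding-window local analysis across frames."""
    x = np.asarray(displacement_per_row, float)
    x = x - x.mean()                                  # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=row_time)
    return freqs[np.argmax(spectrum[1:]) + 1]         # skip the DC bin
```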

  15. Digital quality control of the camera computer interface

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1983-01-01

    A brief description is given of how the gamma camera-computer interface works and what kind of errors can occur. Quality control tests of the interface are then described which include 1) tests of static performance e.g. uniformity, linearity, 2) tests of dynamic performance e.g. basic timing, interface count-rate, system count-rate, 3) tests of special functions e.g. gated acquisition, 4) tests of the gamma camera head, and 5) tests of the computer software. The tests described are mainly acceptance and routine tests. Many of the tests discussed are those recommended by an IAEA Advisory Group for inclusion in the IAEA control schedules for nuclear medicine instrumentation. (U.K.)
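
    As one concrete example of a static-performance test, a NEMA-style integral uniformity figure can be computed from a flood-field acquisition as sketched below; the required nine-point smoothing and exact useful-field-of-view masking are omitted, so this is an assumed simplification rather than the full procedure.

```python
import numpy as np

def integral_uniformity(flood_image, mask=None):
    """NEMA-style integral uniformity (%) of a flood-field image:
    100 * (max - min) / (max + min) over the useful field of view.
    The smoothing and exact UFOV definition required by NEMA are
    omitted in this simplified sketch."""
    img = np.asarray(flood_image, float)
    pixels = img[mask] if mask is not None else img.ravel()
    cmax, cmin = pixels.max(), pixels.min()
    return 100.0 * (cmax - cmin) / (cmax + cmin)
```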

  16. Real-Time Range Sensing Video Camera for Human/Robot Interfacing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In comparison to stereovision, it is well known that structured-light illumination has distinct advantages including the use of only one camera, being significantly...

  17. Game Cinematography: from Camera Control to Player Emotions

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2016-01-01

    Building on the definition of cinematography (Soanes and Stevenson, Oxford dictionary of English. Oxford University Press, Oxford/New York, 2005), game cinematography can be defined as the art of visualizing the content of a computer game. The relationship between game cinematography and its...... traditional counterpart is extremely tight as, in both cases, the aim of cinematography is to control the viewer’s perspective and affect his or her perception of the events represented. However, game events are not necessarily pre-scripted and player interaction has a major role on the quality of a game...... experience; therefore, the role of the camera and the challenges connected to it are different in game cinematography as the virtual camera has to both dynamically react to unexpected events to correctly convey the game story and take into consideration player actions and desires to support her interaction...

  18. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Zou Xiaotao

    2009-01-01

    Full Text Available A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition that provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may be changing in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.

  19. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Xiaotao Zou

    2009-01-01

    Full Text Available A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition that provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may be changing in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.

  20. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WIMAX) is a good candidate for delivering video signals because through WIMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism in multiresolution video coding structures over WIMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and then the end-to-end QoS is achieved. We performed experiments with multiresolution MPEG coding over WIMAX networks. In addition to the QoP parameters, the video characteristics, such as the picture activity and the video mobility, also affect the QoS significantly.

  1. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
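
    The paper's closed-form quantization-parameter rule is not reproduced in this record; purely as a generic illustration of per-frame QP feedback aimed at consistent quality (an assumption, not the authors' formula), a simple controller might look like this.

```python
import math

def next_enhancement_qp(prev_qp, measured_distortion, target_distortion,
                        gain=3.0, qp_min=0, qp_max=51):
    """Generic per-frame QP update aiming at consistent quality (illustration
    only, not the closed-form rule of the cited paper). If the last frame was
    more distorted than the target, decrease QP to spend more bits on the next
    frame, and vice versa. `gain` is in QP units per octave of distortion error;
    in H.264/SVC a 6-step QP increase roughly doubles the quantizer step size."""
    error = math.log2(measured_distortion / target_distortion)
    qp = prev_qp - gain * error
    return int(min(max(round(qp), qp_min), qp_max))
```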

  2. Home video monitoring system for neurodegenerative diseases based on commercial HD cameras

    NARCIS (Netherlands)

    Abramiuc, B.; Zinger, S.; De With, P.H.N.; De Vries-Farrouh, N.; Van Gilst, M.M.; Bloem, B.; Overeem, S.

    2016-01-01

    Neurodegenerative disease (ND) is an umbrella term for chronic disorders that are characterized by severe joint cognitive-motor impairments, which are difficult to evaluate on a frequent basis. HD cameras in the home environment could extend and enhance the diagnosis process and could lead to better

  3. Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...

  4. Lights, Camera, Action: Facilitating the Design and Production of Effective Instructional Videos

    Science.gov (United States)

    Di Paolo, Terry; Wakefield, Jenny S.; Mills, Leila A.; Baker, Laura

    2017-01-01

    This paper outlines a rudimentary process intended to guide faculty in K-12 and higher education through the steps involved to produce video for their classes. The process comprises four steps: planning, development, delivery and reflection. Each step is infused with instructional design information intended to support the collaboration between…

  5. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  6. Improved embedded non-linear processing of video for camera surveillance

    NARCIS (Netherlands)

    Cvetkovic, S.D.; With, de P.H.N.

    2009-01-01

    For a real time imaging in surveillance applications, image fidelity is of primary importance to ensure customer confidence. The fidelity is obtained amongst others via dynamic range expansion and video signal enhancement. The dynamic range of the signal needs adaptation, because the sensor signal

  7. Broadcast court-net sports video analysis using fast 3-D camera modeling

    NARCIS (Netherlands)

    Han, Jungong; Farin, D.S.; With, de P.H.N.

    2008-01-01

    This paper addresses the automatic analysis of court-net sports video content. We extract information about the players, the playing-field in a bottom-up way until we reach scene-level semantic concepts. Each part of our framework is general, so that the system is applicable to several kinds of

  8. A quality control atlas for scintillation camera systems

    International Nuclear Information System (INIS)

    Busemann Sokole, E.; Graham, L.S.; Todd-Pokropek, A.; Wegst, A.; Robilotta, C.C.

    2002-01-01

    Full text: The accurate interpretation of quality control and clinical nuclear medicine image data is coupled to an understanding of image patterns and quantitative results. Understanding is gained by learning from different examples, and knowledge of underlying principles of image production. An Atlas of examples has been created to assist with interpreting quality control tests and recognizing artifacts in clinical examples. The project was initiated and supported by the International Atomic Energy Agency (IAEA). The Atlas was developed and written by Busemann Sokole from image examples submitted from nuclear medicine users from around the world. The descriptive text was written in a consistent format to accompany each image or image set. Each example in the atlas finally consisted of the images; a brief description of the data acquisition, radionuclide/radiopharmaceutical, specific circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, where appropriate, giving guidelines for follow-up strategies and trouble shooting; and occasional literature references. Hardcopy images required digitizing into JPEG format for inclusion into a digital document. Where possible, an example was contained on one page. The atlas was reviewed by an international group of experts. A total of about 250 examples were compiled into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. Subtle loss of image quality may be difficult to detect. SPECT examples, therefore, include simulations demonstrating effects of deterioration in camera performance (e.g. center-of-rotation offset, non-uniformity) or suboptimal clinical performance. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artifacts due to system malfunction, and artifacts due to environmental situations. Some image patterns are

  9. Utilizing social media and video games to control #DIY microscopes

    Directory of Open Access Journals (Sweden)

    Maxime Leblanc-Latour

    2017-12-01

    Full Text Available Open-source lab equipment is becoming more widespread with the popularization of fabrication tools such as 3D printers, laser cutters, CNC machines, open source microcontrollers and open source software. Although many pieces of common laboratory equipment have been developed, software control of these items is sometimes lacking. Specifically, what is often missing is control software that can be easily implemented and that enables user input and control across multiple platforms (PC, smartphone, web, etc.). The aim of this proof-of-principle study was to develop and implement software for the control of a low-cost, 3D printed microscope. Here, we present two approaches which enable microscope control by exploiting the functionality of the social media platform Twitter or player actions inside the video game Minecraft. The microscope was constructed from a modified web-camera and implemented on a Raspberry Pi computer. Three aspects of microscope control were tested, including single image capture, focus control and time-lapse imaging. The Twitter embodiment enabled users to send ‘tweets’ directly to the microscope. Image data acquired by the microscope was then returned to the user through a Twitter reply and stored permanently on the photo-sharing platform Flickr, along with any relevant metadata. Local control of the microscope was also implemented by utilizing the video game Minecraft, in situations where Internet connectivity is not present or stable. A virtual laboratory was constructed inside the Minecraft world and player actions inside the laboratory were linked to specific microscope functions. Here, we present the methodology and results of these experiments and discuss possible limitations and future extensions of this work.

  10. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
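
    The radial-distortion correction stage can be sketched in software with OpenCV's fisheye model, as a stand-in for the FPGA pre-processor described above; the intrinsics K and distortion coefficients D are assumed to come from a prior calibration.

```python
import cv2
import numpy as np

def build_undistort_maps(K, D, size):
    """Precompute remap tables that correct fisheye radial distortion, as a
    software stand-in for the FPGA pre-processor described in the paper.
    K: 3x3 intrinsics, D: 4x1 fisheye distortion coefficients, size: (width, height)."""
    R = np.eye(3)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, R, K, size, cv2.CV_16SC2)
    return map1, map2

# Per-frame correction is then a single remap call:
# undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```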

  11. Forward rectification: spatial image normalization for a video from a forward facing vehicle camera

    Science.gov (United States)

    Prun, Viktor; Polevoy, Dmitri; Postnikov, Vassiliy

    2017-03-01

    The work in this paper is focused on visual ADAS (Advanced Driver Assistance Systems). We introduce forward rectification, a technique for making computer vision algorithms more robust against the camera mount point and mount angles. Using the technique can increase recognition quality as well as lower the dimensionality of the required algorithm invariance, making it possible to apply simpler affine-invariant algorithms in applications that would otherwise require projective invariance. To provide useful results, this rectification requires thorough calibration of the camera, which can be done automatically or semi-automatically. The technique is general in nature and can be applied to different algorithms, such as pattern-matching detectors and convolutional neural networks. Its applicability is demonstrated on the detection rate of a HOG-based car detector.
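
    As a hedged illustration of plane-based rectification in the same spirit (not the paper's exact formulation), a fixed homography estimated from four calibrated point correspondences can be applied to every frame, so that a reference plane maps to a canonical view regardless of the mount angles.

```python
import cv2
import numpy as np

def rectify_forward_view(frame, src_pts, dst_pts, out_size):
    """Warp a forward-facing camera frame so that a calibrated reference plane
    maps to a canonical view, illustrating the spirit of forward rectification
    (not the cited paper's exact formulation). src_pts/dst_pts: four
    corresponding points (pixels) obtained from camera calibration."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, out_size, flags=cv2.INTER_LINEAR)
```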

  12. Surgical video recording with a modified GoPro Hero 4 camera

    OpenAIRE

    Lin LK

    2016-01-01

    Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Me...

  13. Integrated multi sensors and camera video sequence application for performance monitoring in archery

    Science.gov (United States)

    Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali

    2018-03-01

    This paper explains the development of comprehensive archery performance monitoring software consisting of three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables of flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application, which enables the user to view all the data in a single user interface. The five body sensors’ data are displayed in numerical and graphical form in real time. The information transmitted by the body sensors is processed with an embedded algorithm that automatically computes a summary of the athlete’s biomechanical performance and displays it in the application interface. This performance is later compared with the pre-computed psycho-fitness performance derived from the data prefilled into the application. All the data (camera views, body sensor readings and performance computations) are recorded for further analysis by a sports scientist. The developed application serves as a powerful tool for assisting the coach and athletes in observing and identifying any wrong technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.

  14. Observations of sea-ice conditions in the Antarctic coastal region using ship-board video cameras

    Directory of Open Access Journals (Sweden)

    Haruhito Shimoda

    1997-03-01

    Full Text Available During the 30th, 31st, and 32nd Japanese Antarctic Research Expeditions (JARE-30,JARE-31,and JARE-32, sea-ice conditions were recorded by video camera on board the SHIRASE. Then, the sea-ice images were used to estimate compactness and thickness quantitatively. Analyzed legs are those toward Breid Bay and from Breid Bay to Syowa Station during JARE-30 and JARE-31,and those toward the Prince Olav Coast, from the Prince Olav Coast to Breid Bay, and from Breid Bay to Syowa Station during JARE-32. The results show yearly variations of ice compactness and thickness, latitudinal variations of thickness, and differences in thickness histograms between JARE-30 and JARE-32 in Lutzow-Holm Bay. Albedo values were measured simultaneously by a shortwave radiometer. These values are proportional to those of ice compactness. Finally, we examined the relationship between ice compactness and vertical gradient of air temperature above sea ice.

  15. Preliminary condensation pool experiments with steam using DN80 and DN100 blowdown pipes

    Energy Technology Data Exchange (ETDEWEB)

    Laine, J.; Puustinen, M. [Lappeenranta University of Technology (Finland)

    2004-03-01

    The report summarizes the results of the preliminary steam blowdown experiments. Altogether eight experiment series, each consisting of several steam blows, were carried out in autumn 2003 with a scaled-down condensation pool test rig designed and constructed at Lappeenranta University of Technology. The main purpose of the experiments was to evaluate the capabilities of the test rig and the needs for measurement and visualization devices. The experiments showed that a high-speed video camera is essential for visual observation due to the rapid condensation of steam bubbles. Furthermore, the maximum measurement frequency of the current combination of instrumentation and data acquisition system is inadequate for the actual steam tests in 2004. (au)

  16. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  17. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.

  18. Quality control of the gamma camera/computer interface

    International Nuclear Information System (INIS)

    Busemann-Sokole, E.

    1983-01-01

    Reporting on the conference mentioned, the author indicates that technical inspection of the gamma camera and the attached computer each by themselves is not sufficient. The parts of the interface and the hardware or software can contain sources of error. In order to obtain the best diagnostic image a number of control measurements are recommended dealing with image intensifying, intensifier offset, linearity of transformation, exclusion of 'data drop' or 'bit drop', 2-pulse timing, correct response with different counting rates, and response to triggers (electrocardiogram). The last and most important recommendation is to record in writing particulars of each inspection and control measurement, particulars and solutions of problems and modifications in hardware and software. (Auth.)

  19. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body parts and the hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were used to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or categories based on temporal changes in the hand blob. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
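
    The PCA-based recognition stage can be sketched as follows, assuming fixed-length, resampled trajectories have already been extracted (the HFLC extraction and Condensation tracking are not reproduced); the scikit-learn calls are an assumed stand-in for the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_gesture_classifier(trajectories, labels, n_components=10):
    """Fit PCA on flattened, fixed-length gesture trajectories and a 1-NN
    classifier in the reduced space. `trajectories` is an (N, T, 2) array of
    resampled hand positions; this sketches only a PCA-based recognition
    stage, not the HFLC extraction or the Condensation tracker."""
    X = np.asarray(trajectories, float).reshape(len(trajectories), -1)
    pca = PCA(n_components=n_components).fit(X)
    knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), labels)
    return pca, knn

def classify_gesture(pca, knn, trajectory):
    """Project one trajectory into the PCA space and return the nearest label."""
    x = np.asarray(trajectory, float).reshape(1, -1)
    return knn.predict(pca.transform(x))[0]
```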

  20. A Refrigerated Web Camera for Photogrammetric Video Measurement inside Biomass Boilers and Combustion Analysis

    Directory of Open Access Journals (Sweden)

    Enrique Granada

    2011-01-01

    Full Text Available This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  1. A refrigerated web camera for photogrammetric video measurement inside biomass boilers and combustion analysis.

    Science.gov (United States)

    Porteiro, Jacobo; Riveiro, Belén; Granada, Enrique; Armesto, Julia; Eguía, Pablo; Collazo, Joaquín

    2011-01-01

    This paper describes a prototype instrumentation system for photogrammetric measuring of bed and ash layers, as well as for flying particle detection and pursuit using a single device (CCD) web camera. The system was designed to obtain images of the combustion process in the interior of a domestic boiler. It includes a cooling system, needed because of the high temperatures in the combustion chamber of the boiler. The cooling system was designed using CFD simulations to ensure effectiveness. This method allows more complete and real-time monitoring of the combustion process taking place inside a boiler. The information gained from this system may facilitate the optimisation of boiler processes.

  2. Using a thermistor flowmeter with attached video camera for monitoring sponge excurrent speed and oscular behaviour

    DEFF Research Database (Denmark)

    Strehlow, Brian W.; Jorgensen, Damien; Webster, Nicole S.

    2016-01-01

    A digital, four-channel thermistor flowmeter integrated with time-lapse cameras was developed as an experimental tool for measuring pumping rates in marine sponges, particularly those with small excurrent openings (oscula). Combining flowmeters with time-lapse imagery yielded valuable insights...... in pumping activity and osculum contraction were also observed, with sponges increasing their pumping activity to peak at midday and decreasing pumping and contracting oscula at night. Short-term elevation of the suspended sediment concentration (SSC) within the seawater initially decreased pumping rates...

  3. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment...... contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves......

  4. Realization of a gamma emission tomography by a servo-controlled camera and bed

    International Nuclear Information System (INIS)

    Parmentier, M.; Gunzman, D.; Bidet, R.

    1979-01-01

    A gamma-camera and a whole-body bed were connected to a minicomputer which controlled automatically their movements. By combining horizontal displacement of the bed with vertical displacement and rotation of the camera we were able to obtain the equivalent of camera rotation around the bed. This method provides an inexpensive way of realizing gamma emission tomography [fr

  5. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique for processing standard RGB video data of exposed human skin and extracting the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given its non-contact and sensor-free nature. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the MATLAB® Computer Vision toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
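
    A minimal VPPG pipeline of the kind evaluated here, reduced to its simplest assumed form (per-frame ROI green-channel means, detrending, and a spectral peak search within a physiological band), is sketched below; it is not either of the two specific algorithms compared in the study.

```python
import numpy as np

def heart_rate_from_roi_means(green_means, fps, hr_band=(0.7, 4.0)):
    """Estimate heart rate (beats/min) from the per-frame mean green value of a
    skin region of interest. Minimal VPPG sketch: detrend, restrict the spectrum
    to a physiological band (default 42-240 bpm), and take the dominant peak.
    The study's specific algorithms and KLT face tracking are not reproduced."""
    x = np.asarray(green_means, float)
    x = x - x.mean()                                   # remove the DC component
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```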

  6. Social interactions of juvenile brown boobies at sea as observed with animal-borne video cameras.

    Directory of Open Access Journals (Sweden)

    Ken Yoda

    Full Text Available While social interactions play a crucial role in the development of young individuals, those of highly mobile juvenile birds in inaccessible environments are difficult to observe. In this study, we deployed miniaturised video recorders on juvenile brown boobies Sula leucogaster, which had been hand-fed beginning a few days after hatching, to examine how social interactions between tagged juveniles and other birds affected their flight and foraging behaviour. Juveniles flew longer with congeners, especially with adult birds, than solitarily. In addition, approximately 40% of foraging occurred close to aggregations of congeners and other species. Young seabirds voluntarily followed other birds, which may directly enhance their foraging success, improve their foraging and flying skills during their developmental stage, or both.

  7. Effectance and control as determinants of video game enjoyment

    NARCIS (Netherlands)

    Klimmt, C.; Hartmann, T.; Frey, A.

    2007-01-01

    This article explores video game enjoyment originated by games' key characteristic, interactivity. An online experiment (N = 500) tested experiences of effectance (perceived influence on the game world) and of being in control as mechanisms that link interactivity to enjoyment. A video game was

  8. Quality control of scintillation cameras (planar and SPECT)

    International Nuclear Information System (INIS)

    Shaekhoon, E.S.

    2008-01-01

    Regular quality control is one of the cornerstones of nuclear medicine and a prerequisite for adequate diagnostic imaging. Many papers have been published on quality control of planar and SPECT imaging systems; however, only minor attention has been given to the assessment of the performance of imaging systems. In this research we discuss a comprehensive set of test procedures, including regular quality control. Our purpose is to analyse the methods and results and then to test our hypothesis, which states that there is a strong relationship between regular, proper quality control and the continuity of better medical services in a nuclear medicine department. The selection of the tests is discussed and the tests are described, then results are presented. In addition, action thresholds are proposed. The quality control tests can be applied to systems with either a moving detector or a moving image table, and to both detectors with a large field of view and detectors with a small field of view. The tests presented in this research do not require special phantoms or sources other than those used for quality control of stationary gamma cameras and SPECT. They can be applied for acceptance testing and for performance testing in a regular quality assurance program. The data were evaluated using Mediso software in comparison with the IAEA expert software and the system specifications, within the reference values. Our final results confirm our hypothesis. Some observations about the characteristics and performance of the system were made and resolved, and a departmental protocol for routine quality control (QC) was then established. (Author)

  9. Effectance and control as determinants of video game enjoyment.

    Science.gov (United States)

    Klimmt, Christoph; Hartmann, Tilo; Frey, Andreas

    2007-12-01

    This article explores video game enjoyment originated by games' key characteristic, interactivity. An online experiment (N=500) tested experiences of effectance (perceived influence on the game world) and of being in control as mechanisms that link interactivity to enjoyment. A video game was manipulated to either allow normal play, reduce perceived effectance, or reduce perceived control. Enjoyment ratings suggest that effectance is an important factor in video game enjoyment but that the relationship between control of the game situation and enjoyment is more complex.

  10. Illusory control, gambling, and video gaming: an investigation of regular gamblers and video game players.

    Science.gov (United States)

    King, Daniel L; Ejova, Anastasia; Delfabbro, Paul H

    2012-09-01

    There is a paucity of empirical research examining the possible association between gambling and video game play. In two studies, we examined the association between video game playing, erroneous gambling cognitions, and risky gambling behaviour. One hundred and fifteen participants, including 65 electronic gambling machine (EGM) players and 50 regular video game players, were administered a questionnaire that examined video game play, gambling involvement, problem gambling, and beliefs about gambling. We then assessed each groups' performance on a computerised gambling task that involved real money. A post-game survey examined perceptions of the skill and chance involved in the gambling task. The results showed that video game playing itself was not significantly associated with gambling involvement or problem gambling status. However, among those persons who both gambled and played video games, video game playing was uniquely and significantly positively associated with the perception of direct control over chance-based gambling events. Further research is needed to better understand the nature of this association, as it may assist in understanding the impact of emerging digital gambling technologies.

  11. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    Directory of Open Access Journals (Sweden)

    Susan G Heaslip

    Full Text Available The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long distance migrations (1000s of km) from nesting beaches to high latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ d^-1 but were as high as 167,797 kJ d^-1, corresponding to turtles consuming an average of 330 kg wet mass d^-1 (up to 840 kg d^-1) or approximately 261 (up to 664) jellyfish d^-1. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass d^-1, equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable and our results are consistent with estimates of mass gain prior to

  12. Localization of viewpoint of a video camera in a partially modeled universe

    International Nuclear Information System (INIS)

    Awanzino, C.

    2000-01-01

    Interventions in reprocessing cells in nuclear plants are performed by tele-operated robots. These reprocessing cells essentially consist of repetitive structures of similar pipes. In addition, the pipes in the cell are metallic, so illuminating them with a light source produces areas of high light intensity, called highlights. Highlights often cause image processing failures, which lead to image misinterpretation, making it very difficult for the operator to navigate. Our work aims at providing a system able to localize the robot inside the cell at any time in order to help the operator. A database of the cell is provided, but this database may be incomplete or imprecise. First, we proposed a polarization-based system, which exploits highlights to extract the axes of the pipes by discriminating the scene from the background. However, when highlights are missing, the process may fail. In a second part, we therefore proposed a localization method using a correlation-based assignment process. The robot localization is performed by minimizing a twofold criterion. The first part of this criterion expresses a good projection of the textured model into the image. The second expresses the fact that the system composed of the scene and two successive images has to satisfy the epipolar constraint. The minimization criterion is symmetric with respect to time so that the localization process is not perturbed by previous localization errors. Indeed, the method reconsiders the previous localization, with respect to the new image, to best localize the new camera attitude. In order to validate the method, some experiments have been presented, but more general ones remain to be performed. (author) [fr

  13. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate control algorithms by using a video source model is suggested. The proposed method makes it possible to significantly improve algorithm testing over a large test set.

  14. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video-content-analysis tasks in large-scale ad-hoc networks

    NARCIS (Netherlands)

    Hollander R.J.M. den; Bouma, H.; Rest, J.H.C. van; Hove, J.M. ten; Haar, F.B. ter; Burghouts, G.J.

    2017-01-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene, and for changes in the optical chain so a VSS

  15. Standardization of the intrinsic uniformity control of the gamma cameras

    International Nuclear Information System (INIS)

    Solsona Harster, Lluis; Llopis Gonzalez, David; Pavia Segura, Javier

    2001-01-01

    Objective: To verify the intrinsic uniformity (IU) results using different acquisition parameters in the weekly gamma camera quality control (QC). Material and Methods: We carried out 4 experiments using Tc-99m sources, modifying the orientation, distance, activity and volume parameters of a source, in ten NaI detectors with photomultipliers, applying the following acquisition conditions: 4000 kcounts, the source 2 m from the geometrical centre of the detectors, 0.1 ml in a 1 ml syringe, and 150 µCi of Tc-99m. Results: The results improved as the detector-source distance increased, with the best results found between 1.5 and 2 m. We also found it necessary for the collimator plane to be parallel with respect to the geometrical centre of the field of view, because a deviation of only two degrees can change the result by about 0.5%. We also studied the activity to be used; the best results were obtained with neither the highest nor the lowest source activities. Regarding volume, better results were obtained when the source volume was larger than 1 ml. Conclusion: Following our results on the variation of IU values with distance, detector/source rotation, dose and source activity, we recommend performing this QC applying NEMA rules under the same conditions every week and using the parameters from our study to obtain better IU. (Au)

  16. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.
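
    A discrete PID loop of the kind used to hold the casing at its set temperature can be sketched as follows; the gains, output limits and heater interface are illustrative assumptions, not the values or hardware of the cited design.

```python
class PID:
    """Minimal discrete PID controller for holding a camera casing at a set
    temperature. Gains, limits and the heater interface are illustrative
    assumptions, not the cited design's values."""
    def __init__(self, kp, ki, kd, setpoint, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.out_min, self.out_max = out_min, out_max
        self._integral = 0.0
        self._prev_error = None

    def update(self, measured_temp, dt):
        """Return a heater duty cycle in [out_min, out_max] for one time step."""
        error = self.setpoint - measured_temp
        self._integral += error * dt
        derivative = 0.0 if self._prev_error is None else (error - self._prev_error) / dt
        self._prev_error = error
        out = self.kp * error + self.ki * self._integral + self.kd * derivative
        return min(max(out, self.out_min), self.out_max)   # clamp the output
```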

  17. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for individual procedure vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency, intraclass correlation coefficients were compared by colocated versus video review of IPS, and errors. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) - 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. Intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations video versus colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p competency. Prognostic study, level II.

  18. Learning, attentional control, and action video games.

    Science.gov (United States)

    Green, C S; Bavelier, D

    2012-03-20

    While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on 'action video games' produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Learning, attentional control and action video games

    Science.gov (United States)

    Green, C.S.; Bavelier, D.

    2012-01-01

    While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on ‘action video games’ produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning. PMID:22440805

  20. Video flow active control by means of adaptive shifted foveal geometries

    Science.gov (United States)

    Urdiales, Cristina; Rodriguez, Juan A.; Bandera, Antonio J.; Sandoval, Francisco

    2000-10-01

    This paper presents a control mechanism for video transmission that relies on transmitting non-uniform resolution images depending on the delay of the communication channel. These images are built in an active way to keep the areas of interest of the image at the highest resolution available. In order to shift the area of high resolution over the image and to achieve a data structure that is easy to process with conventional algorithms, a shifted-fovea multiresolution geometry of adaptive size is used. In addition, if delays are nevertheless too high, the different resolution areas of the image can be transmitted at different rates. A functional system has been developed for corridor surveillance with static cameras. Tests with real video images have proven that the method allows an almost constant rate of images per second as long as the channel is not collapsed.
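
    The delay-driven foveation described above can be illustrated with a small numpy sketch that keeps a rectangular area of interest at full resolution and coarsens the periphery by a factor chosen from the measured channel delay; the thresholds, fovea size and simple decimation used here are assumptions, not the paper's shifted-fovea geometry.

    # Sketch of delay-driven non-uniform resolution: fovea kept sharp, periphery coarsened.
    import numpy as np

    def peripheral_factor(delay_ms):
        # Map channel delay to a peripheral downsampling factor (hypothetical thresholds).
        if delay_ms < 50:
            return 2
        if delay_ms < 150:
            return 4
        return 8

    def foveate(frame, cx, cy, half, delay_ms):
        f = peripheral_factor(delay_ms)
        h, w = frame.shape[:2]
        # Coarsen the whole frame by decimation, then expand it back to full size.
        coarse = frame[::f, ::f]
        periphery = np.repeat(np.repeat(coarse, f, axis=0), f, axis=1)[:h, :w]
        out = periphery.copy()
        # Restore full resolution inside the fovea around the area of interest.
        y0, y1 = max(0, cy - half), min(h, cy + half)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
        return out

    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
    reduced = foveate(frame, cx=320, cy=240, half=64, delay_ms=120)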

  1. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    Science.gov (United States)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera connected and slaved to the head orientation of a free-moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  2. NEMA NU-1 2007 based and independent quality control software for gamma cameras and SPECT

    International Nuclear Information System (INIS)

    Vickery, A; Joergensen, T; De Nijs, R

    2011-01-01

    A thorough quality assurance of gamma and SPECT cameras requires careful handling of the measured quality control (QC) data. Most gamma camera manufacturers provide users with camera-specific QC software. This QC software is indeed a useful tool for following the day-to-day performance of a single camera. However, when it comes to objective performance comparison of different gamma cameras and a deeper understanding of the calculated numbers, the use of camera-specific QC software without access to the source code is best avoided. Calculations and definitions might differ, and manufacturer-independent, standardized results are preferred. Based upon the NEMA Standards Publication NU 1-2007, we have developed a suite of easy-to-use data handling software for processing acquired QC data, providing the user with instructive images and text files with the results.
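
    As an example of such manufacturer-independent processing, a NEMA-style integral uniformity of a flood-field image can be computed in a few lines; the synthetic flood image and the crude useful-field-of-view mask below are placeholders, and only the integral (not differential) figure is shown.

    # Sketch of a NEMA NU 1 style integral uniformity calculation on a flood image.
    import numpy as np
    from scipy.ndimage import convolve

    def integral_uniformity(flood, mask):
        # NEMA 9-point smoothing kernel.
        kernel = np.array([[1, 2, 1],
                           [2, 4, 2],
                           [1, 2, 1]], dtype=float) / 16.0
        smoothed = convolve(flood.astype(float), kernel, mode="nearest")
        vals = smoothed[mask]
        return 100.0 * (vals.max() - vals.min()) / (vals.max() + vals.min())

    flood = np.random.poisson(10000, size=(64, 64)).astype(float)   # synthetic flood field
    ufov = np.zeros(flood.shape, dtype=bool)
    ufov[4:60, 4:60] = True                                         # crude UFOV mask
    print(f"Integral uniformity: {integral_uniformity(flood, ufov):.2f} %")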

  3. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    Science.gov (United States)

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...

  4. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  5. The photothermal camera - a new non destructive inspection tool; La camera photothermique - une nouvelle methode de controle non destructif

    Energy Technology Data Exchange (ETDEWEB)

    Piriou, M. [AREVA NP Centre Technique SFE - Zone Industrielle et Portuaire Sud - BP13 - 71380 Saint Marcel (France)

    2007-07-01

    The Photothermal Camera, developed by the Non-Destructive Inspection Department at AREVA NP's Technical Center, is a device created to replace penetrant testing, a method whose drawbacks include environmental pollutants, industrial complexity and potential operator exposure. We have already seen how the Photothermal Camera can work alongside or instead of conventional surface inspection techniques such as penetrant, magnetic particle or eddy currents. With it, users can detect without any surface contact ligament defects or openings measuring just a few microns on rough oxidized, machined or welded metal parts. It also enables them to work on geometrically varied surfaces, hot parts or insulating (dielectric) materials without interference from the magnetic properties of the inspected part. The Photothermal Camera method has already been used for in situ inspections of tube/plate welds on an intermediate heat exchanger of the Phenix fast reactor. It also replaced the penetrant method for weld inspections on the ITER vacuum chamber, for weld crack detection on vessel head adapter J-welds, and for detecting cracks brought on by heat crazing. What sets this innovative method apart from others is its ability to operate at distances of up to two meters from the inspected part, as well as its remote control functionality at distances of up to 15 meters (or more via Ethernet), and its emissions-free environmental cleanliness. These make it a true alternative to penetrant testing, to the benefit of operator and environmental protection. (author)

  6. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  7. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  8. General Attitude Control Algorithm for Spacecraft Equipped with Star Camera and Reaction Wheels

    DEFF Research Database (Denmark)

    Wisniewski, Rafal; Kulczycki, P.

    A configuration consisting of a star camera, four reaction wheels and magnetorquers for momentum unloading has become standard for many spacecraft missions. This popularity has motivated numerous agencies and private companies to initiate work on the design of an embedded attitude control system...... realized on an integrated circuit. This paper considers two issues: slew maneuver with a feature of avoiding direct exposure of the camera's CCD chip to the Sun, three-axis attitude control and optimal control torque distribution in a reaction wheel assembly. The attitude controller is synthesized......

  9. Camera System Deployment for Speeding Control in Australia

    Directory of Open Access Journals (Sweden)

    Zuhair Ebrahim

    2014-12-01

    Full Text Available In Australia, the Auditor-General plays the role of checking on system fiscal efficiency, performance and effective communications between safety professionals and the public road users. The focus of this paper is to evaluate the possibility of public approval of the information that is to be released, e.g. camera strategic initiatives, assessed through mail-out questionnaires. Two visual- and policy-related attributes were investigated in these questionnaires. Each attribute had 5 initiatives. A multi-logistic regression is performed on the approval level of the drivers for the strategic initiative of running a speed-awareness course. This initiative is determined to be statistically significant using the independent variables age, years of experience, status, gender, and driver environment. Our analysis shows that the driver environment/background is a significant independent variable for approving speed awareness courses. Road users from non-industrial areas are more likely to approve of speed awareness courses than road users from industrial areas. They also welcome tougher demerit rules and police enforcement. Our study suggests the speed awareness course, an educational initiative, should incorporate the tougher demerit rules to change repeat offenders' driving behaviour. It is foreseeable that once these drivers are enrolled in the course, safer driving practices would be achieved, mitigating the danger, risk and trauma that result from speeding. Our study may benefit professionals involved with improving traffic safety such as those in Asia, Africa, the Middle East and the Arab Gulf countries, particularly the Kingdom of Saudi Arabia, where a high number of fatalities and serious injuries involve speeding. Our study confirms that positive, transparent and satisfying initiatives should be executed with care to maintain sustainable and safer roads for enhancing national partnership between road users

  10. Assessment of Machine Learning Algorithms for Automatic Benthic Cover Monitoring and Mapping Using Towed Underwater Video Camera and High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Hassan Mohamed

    2018-05-01

    Full Text Available Benthic habitat monitoring is essential for many applications involving biodiversity, marine resource management, and the estimation of variations over temporal and spatial scales. Nevertheless, both automatic and semi-automatic analytical methods for deriving ecologically significant information from towed camera images are still limited. This study proposes a methodology that enables a high-resolution towed camera with a Global Navigation Satellite System (GNSS) to adaptively monitor and map benthic habitats. First, the towed camera finishes a pre-programmed initial survey to collect benthic habitat videos, which can then be converted to geo-located benthic habitat images. Second, an expert labels a number of benthic habitat images to classify habitats manually. Third, attributes for categorizing these images are extracted automatically using the Bag of Features (BOF) algorithm. Fourth, benthic cover categories are detected automatically using Weighted Majority Voting (WMV) ensembles of Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Bagging (BAG) classifiers. Fifth, WMV-trained ensembles can be used to categorize more benthic cover images automatically. Finally, correctly categorized geo-located images can provide ground truth samples for benthic cover mapping using high-resolution satellite imagery. The proposed methodology was tested over Shiraho, Ishigaki Island, Japan, a heterogeneous coastal area. The WMV ensemble exhibited 89% overall accuracy for categorizing corals, sediments, seagrass, and algae species. Furthermore, the same WMV ensemble produced a benthic cover map using a Quickbird satellite image with 92.7% overall accuracy.
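
    A weighted majority voting ensemble of the classifier types named above can be assembled with scikit-learn, as in the sketch below; the synthetic features, labels and per-classifier weights are placeholders rather than the study's Bag-of-Features descriptors or tuned parameters.

    # Sketch of a weighted-majority-voting ensemble (SVM + k-NN + Bagging).
    import numpy as np
    from sklearn.ensemble import BaggingClassifier, VotingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))          # stand-in for Bag-of-Features descriptors
    y = rng.integers(0, 4, size=200)        # four classes: coral, sediment, seagrass, algae

    ensemble = VotingClassifier(
        estimators=[("svm", SVC()),
                    ("knn", KNeighborsClassifier(n_neighbors=5)),
                    ("bag", BaggingClassifier(n_estimators=50))],
        voting="hard",
        weights=[2, 1, 1],                  # hypothetical per-classifier vote weights
    )
    ensemble.fit(X, y)
    print(ensemble.predict(X[:5]))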

  11. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    Science.gov (United States)

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
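
    The estimate rests on the grating relation m·λ = d·sin θ; because both patterns share the same grating and camera geometry, the first-order fringe displacement is (to a small-angle approximation) proportional to wavelength, so the unknown wavelength follows from a ratio of fringe spacings. The pixel distances in the sketch below are made-up measurements.

    # Sketch: estimate the IR wavelength from fringe spacings measured on one photograph.
    lambda_red_nm = 650.0    # known red laser wavelength
    x_red_px = 212.0         # centre-to-first-order fringe distance for the red laser (measured)
    x_ir_px = 308.0          # centre-to-first-order fringe distance for the remote control (measured)

    lambda_ir_nm = lambda_red_nm * x_ir_px / x_red_px
    print(f"Estimated IR wavelength: {lambda_ir_nm:.0f} nm")   # TV remotes typically emit near 940 nm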

  12. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single-camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose-fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  13. Adaptive Neural-Sliding Mode Control of Active Suspension System for Camera Stabilization

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-01-01

    Full Text Available The camera always suffers from image instability on the moving vehicle due to the unintentional vibrations caused by road roughness. This paper presents a novel adaptive neural network based sliding mode control strategy to stabilize the image-captured area of the camera. The purpose is to suppress vertical displacement of the sprung mass with the application of an active suspension system. Since the active suspension system has nonlinear and time-varying characteristics, an adaptive neural network (ANN) is proposed to make the controller robust against systematic uncertainties, which relaxes the model-based requirement of the sliding mode control, and the weighting matrix is adjusted online according to a Lyapunov function. The control system consists of two loops. The outer loop is a position controller designed with a sliding mode strategy, while the PID controller in the inner loop is to track the desired force. The closed-loop stability and asymptotic convergence performance can be guaranteed on the basis of Lyapunov stability theory. Finally, the simulation results show that the employed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
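
    The flavour of such a controller can be conveyed with a stripped-down simulation: a second-order plant with an unknown disturbance, a sliding surface s = ė + λe, and a radial-basis-function network whose weights are adapted online to cancel the uncertainty. This is a generic sketch under those assumptions, not the paper's suspension model or gains.

    # Generic sketch of sliding-mode control with an adaptive RBF compensator on a
    # second-order plant x'' = u + d(t).  Gains, RBF centres and the disturbance are illustrative.
    import numpy as np

    dt, T = 0.001, 5.0
    lam, k_s, gamma = 5.0, 10.0, 50.0          # surface slope, switching gain, adaptation rate
    centres = np.linspace(-2.0, 2.0, 9)        # RBF centres over the expected error range
    width = 0.5

    def rbf(e):
        return np.exp(-((e - centres) ** 2) / (2 * width ** 2))

    w = np.zeros_like(centres)                 # adaptive RBF weights
    x, xd = 0.0, 0.0                           # plant state: position, velocity
    for i in range(int(T / dt)):
        t = i * dt
        r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)            # reference trajectory
        e, ed = r - x, rd - xd
        s = ed + lam * e                                         # sliding surface
        phi = rbf(e)
        u = rdd + lam * ed + k_s * np.tanh(s / 0.05) + w @ phi   # control with RBF compensation
        w += gamma * s * phi * dt                                # Lyapunov-motivated adaptive law
        d = 0.5 * np.sin(3 * t)                                  # unknown disturbance
        xdd = u + d                                              # plant dynamics
        xd += xdd * dt
        x += xd * dt
    print(f"final tracking error: {abs(np.sin(T) - x):.4f}")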

  14. Changes are detected - cameras and video systems are monitoring the plant site, only rarely giving false alarm

    International Nuclear Information System (INIS)

    Zeissler, H.

    1988-01-01

    The main purpose of automatic data acquisition and processing for monitoring goals is to relieve the security personnel from monotonous observation tasks. The novel video systems can be programmed to detect moving target alarm signals, or accept alarm-suppressing image changes. This allows an intelligent alarm evaluation for physical protection in industry, differentiating between real and false alarm signals. (orig.) [de
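
    The kind of moving-target alarm described above can be sketched as a frame-differencing loop in OpenCV; the video source index, noise threshold and minimum pixel count are arbitrary illustrative values, and real systems add the alarm-suppression logic mentioned in the record.

    # Sketch of a simple frame-differencing motion alarm.
    import cv2

    cap = cv2.VideoCapture(0)                       # any camera index or video file path
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)                        # inter-frame difference
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # suppress sensor noise
        if cv2.countNonZero(mask) > 500:                           # crude moving-target alarm
            print("ALARM: motion detected")
        prev_gray = gray

    cap.release()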

  15. Use of an unmanned aerial vehicle-mounted video camera to assess feeding behavior of Raramuri Criollo cows

    Science.gov (United States)

    We determined the feasibility of using unmanned aerial vehicle (UAV) video monitoring to predict intake of discrete food items of rangeland-raised Raramuri Criollo non-nursing beef cows. Thirty-five cows were released into a 405-m2 rectangular dry lot, either in pairs (pilot tests) or individually (...

  16. The Effect of Smartphone Video Camera as a Tool to Create Gigital Stories for English Learning Purposes

    Science.gov (United States)

    Gromik, Nicolas A.

    2015-01-01

    The integration of smartphones in the language learning environment is gaining research interest. However, using a smartphone to learn to speak spontaneously has received little attention. The emergence of smartphone technology and its video recording feature are recognised as suitable learning tools. This paper reports on a case study conducted…

  17. What Does the Camera Communicate? An Inquiry into the Politics and Possibilities of Video Research on Learning

    Science.gov (United States)

    Vossoughi, Shirin; Escudé, Meg

    2016-01-01

    This piece explores the politics and possibilities of video research on learning in educational settings. The authors (a research-practice team) argue that changing the stance of inquiry from "surveillance" to "relationship" is an ongoing and contingent practice that involves pedagogical, political, and ethical choices on the…

  18. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  19. Design and performance of an acquisition and control system for a positron camera with novel detectors

    International Nuclear Information System (INIS)

    Symonds-Tayler, J.R.N.; Reader, A.J.; Flower, M.A.

    1996-01-01

    A Sun-based data acquisition and control (DAQ) system has been designed for PETRRA, a whole-body positron camera using large-area BaF2-TMAE detectors. The DAQ system uses a high-speed digital I/O card (S16D) installed on the S-bus of a SPARC10 and a specially-designed Positron Camera Interface (PCI), which also controls both the gantry and horizontal couch motion. Data in the form of different types of 6-byte packets are acquired in list mode. Tests with a signal generator show that the DAQ system should be able to cater for coincidence count-rates up to 100 kcps. The predicted count loss due to the DAQ system is ∼13% at this count rate, provided asynchronous-read based software is used. The list-mode data acquisition system designed for PETRRA could be adapted for other 3D PET cameras with similar data rates

  20. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family that is available with either 6000, 8000 or 10000 pixels/color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/s. Conversion from 12 to 8 bit, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: low-speed, high-sensitivity mode or high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
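
    The 12-bit to 8-bit conversion through a user-defined gamma look-up table mentioned above can be reproduced in a few lines of numpy; the gamma value and synthetic line data are arbitrary examples.

    # Sketch of a 12-bit to 8-bit conversion through a user-defined gamma look-up table.
    import numpy as np

    gamma = 0.45                               # example gamma; any user-defined curve could be tabulated
    lut = (255.0 * (np.arange(4096) / 4095.0) ** gamma).round().astype(np.uint8)

    raw12 = np.random.randint(0, 4096, size=(3, 8000), dtype=np.uint16)   # synthetic 12-bit RGB line data
    out8 = lut[raw12]                          # per-pixel table lookup yields the 8-bit stream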

  1. Slew Maneuver Control for Spacecraft Equipped with Star Camera and Reaction Wheels

    DEFF Research Database (Denmark)

    Wisniewski, Rafal; Kulczycki, P.

    2005-01-01

    A configuration consisting of a star camera, four reaction wheels and magnetorquers for momentum unloading has become standard for many spacecraft missions. This popularity has motivated numerous agencies and private companies to initiate work on the design of an embedded attitude control system...... realized on an integrated circuit. This paper provides an easily implementable control algorithm for this type of configuration. The paper considers two issues: slew maneuver with a feature of avoiding direct exposure of the camera's CCD chip to the Sun, three-axis attitude control and optimal control...... torque distribution in a reaction wheel assembly. The attitude controller is synthesized applying the energy shaping technique, where the desired potential function is carefully designed using a physical insight into the nature of the problem. The system stability is thoroughly analyzed and the control...

  2. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  3. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  4. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
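
    A common way to realise CNN-based feature extraction of this kind is to take a convolutional backbone and read out its penultimate layer; the sketch below uses a torchvision ResNet-18 (torchvision >= 0.13 API) with an untrained linear head purely as an illustration and does not reproduce the network, training data or fusion scheme of the papers above.

    # Sketch of CNN image-feature extraction with a torchvision backbone (illustrative only).
    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=None)   # pretrained weights could be loaded instead
    backbone.fc = nn.Identity()                # drop the classifier, keep the 512-d feature vector
    backbone.eval()

    # One batch of already preprocessed visible-light or thermal frames: N x 3 x 224 x 224.
    frames = torch.randn(8, 3, 224, 224)
    with torch.no_grad():
        features = backbone(frames)            # shape: (8, 512)

    # A simple two-class (male/female) head on top of the extracted features (hypothetical).
    classifier = nn.Linear(512, 2)
    logits = classifier(features)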

  5. Quality control of radiosurgery: dosimetry with micro camera in spherical mannequin

    International Nuclear Information System (INIS)

    Casado Villalon, F. J.; Navarro Guirado, F.; Garci Pareja, S.; Benitez Villegas, E. M.; Galan Montenegro, P.; Moreno Saiz, C.

    2013-01-01

    The dosimetry of small fields is part of quality control in cranial radiosurgery treatments. In this work, the absorbed dose at the isocenter calculated by the planner is compared with that obtained experimentally with a micro camera in a spherical mannequin. (Author)

  6. Video game training enhances cognitive control in older adults.

    Science.gov (United States)

    Anguera, J A; Boccanfuso, J; Rintoul, J L; Al-Hashimi, O; Faraji, F; Janowich, J; Kong, E; Larraburo, Y; Rolle, C; Johnston, E; Gazzaley, A

    2013-09-05

    Cognitive control is defined by a set of neural processes that allow us to interact with our complex environment in a goal-directed manner. Humans regularly challenge these control processes when attempting to simultaneously accomplish multiple goals (multitasking), generating interference as the result of fundamental information processing limitations. It is clear that multitasking behaviour has become ubiquitous in today's technologically dense world, and substantial evidence has accrued regarding multitasking difficulties and cognitive control deficits in our ageing population. Here we show that multitasking performance, as assessed with a custom-designed three-dimensional video game (NeuroRacer), exhibits a linear age-related decline from 20 to 79 years of age. By playing an adaptive version of NeuroRacer in multitasking training mode, older adults (60 to 85 years old) reduced multitasking costs compared to both an active control group and a no-contact control group, attaining levels beyond those achieved by untrained 20-year-old participants, with gains persisting for 6 months. Furthermore, age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by multitasking training (enhanced midline frontal theta power and frontal-posterior theta coherence). Critically, this training resulted in performance benefits that extended to untrained cognitive control abilities (enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting the training-induced boost in sustained attention and preservation of multitasking improvement 6 months later. These findings highlight the robust plasticity of the prefrontal cognitive control system in the ageing brain, and provide the first evidence, to our knowledge, of how a custom-designed video game can be used to assess cognitive abilities across the lifespan, evaluate underlying neural mechanisms, and serve as a powerful tool

  7. Gamma camera computer system quality control for conventional and tomographic use

    International Nuclear Information System (INIS)

    Laird, E.E.; Allan, W.; Williams, E.D.

    1983-01-01

    The proposition that some of the proposed measurements of gamma camera performance parameters for routine quality control are redundant and that only the uniformity requires daily monitoring was examined. To test this proposition, measurements of gamma camera performance were carried out under normal operating conditions and also with the introduction of faults (offset window, offset PM tube). Results for the uniform flood field are presented for non-uniformity, intrinsic spatial resolution, linearity and relative system sensitivity. The response to introduced faults revealed that while the non-uniformity response pattern of the gamma camera was clearly affected, both measurements and qualitative indications of the other performance parameters did not necessarily show any deterioration. (U.K.)

  8. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2011-01-01

    Full Text Available A new system to very precisely couple radiation of a moving source into a Fourier Transform Infrared (FTIR) Spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10 arc s for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system makes it possible to document residual pointing errors and to point onto the solar disk center even in case of variable intensity distributions across the source due to cirrus or haze.
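
    The core feedback step - locate the solar image on the entrance field stop and command a pointing correction proportional to its offset - can be sketched as below; the brightness threshold, loop gain and synthetic image are placeholders, and the actual Camtracker evaluation and motor interface are not reproduced.

    # Sketch of the camera-based tracking step: centroid of the solar image -> pointing correction.
    import cv2
    import numpy as np

    def pointing_correction(image, target_xy, gain=0.01):
        # Segment the bright solar disk and compute its centroid.
        _, mask = cv2.threshold(image, 200, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] == 0:
            return 0.0, 0.0                    # no sun found; leave the tracker where it is
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Proportional correction in azimuth/elevation (arbitrary sign convention and units).
        return gain * (target_xy[0] - cx), gain * (target_xy[1] - cy)

    image = np.zeros((480, 640), dtype=np.uint8)
    cv2.circle(image, (350, 230), 40, 255, -1)  # synthetic solar disk, slightly off centre
    print(pointing_correction(image, target_xy=(320, 240)))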

  9. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera always suffers from image instability on the moving vehicle due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach mixed with linear quadratic regulator control for a quarter-car active suspension system to stabilize the image captured area of the camera. An active suspension system provides extra force through the actuator which allows it to suppress vertical vibration of sprung mass. First, to deal with the road disturbance and the system uncertainties, radial basis function neural network is proposed to construct the map between the state error and the compensation component, which can correct the optimal state-feedback control law. The weights matrix of radial basis function neural network is adaptively tuned online. Then, the closed-loop stability and asymptotic convergence performance is guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
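
    The linear-quadratic part of such a scheme reduces to solving an algebraic Riccati equation for a quarter-car state-space model; the masses, stiffnesses and weighting matrices below are generic textbook values, not the article's, so the resulting gain is only indicative.

    # Sketch: LQR state-feedback gain for a quarter-car active suspension.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    ms, mu = 320.0, 40.0                     # sprung / unsprung mass [kg]
    ks, kt, cs = 18000.0, 200000.0, 1000.0   # suspension stiffness, tyre stiffness [N/m], damping [N s/m]

    # State: [sprung displacement, sprung velocity, unsprung displacement, unsprung velocity]
    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [-ks/ms, -cs/ms, ks/ms, cs/ms],
                  [0.0, 0.0, 0.0, 1.0],
                  [ks/mu, cs/mu, -(ks + kt)/mu, -cs/mu]])
    B = np.array([[0.0], [1.0/ms], [0.0], [-1.0/mu]])

    # Penalise sprung-mass (camera platform) motion most heavily; weights are illustrative.
    Q = np.diag([1e4, 1e2, 1.0, 1.0])
    R = np.array([[1e-4]])

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)          # optimal state feedback u = -K x
    print(K)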

  10. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    Science.gov (United States)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being developed these days, with a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object and an ultrasonic sensor was used to detect obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor was used to complement the image processing so that obstacles could be fully detected. Visual-feedback-based PID controllers were used to control the drone's movement. The obstacle avoidance system was evaluated by observing the program's decisions for several obstacle conditions read by the camera and ultrasonic sensors.
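
    The Lucas-Kanade-Tomasi tracking step maps directly onto OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK, as in the sketch below; the video file name and detector/tracker parameters are placeholders, and the fusion with the ultrasonic reading and the PID command are only indicated in comments.

    # Sketch of Lucas-Kanade-Tomasi feature tracking between consecutive frames.
    import cv2

    cap = cv2.VideoCapture("flight.mp4")                  # hypothetical onboard video
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                     qualityLevel=0.3, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        good = new_points[status.flatten() == 1]          # keep successfully tracked features
        # An obstacle position estimate from these tracks would be fused with the ultrasonic
        # range reading before the PID-based avoidance command is issued.
        prev_gray, points = gray, good.reshape(-1, 1, 2)

    cap.release()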

  11. Video Enhancement and Dynamic Range Control of HDR Sequences for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Giovanni Ramponi

    2007-01-01

    Full Text Available CMOS video cameras with high dynamic range (HDR) output are particularly suitable for driving assistance applications, where lighting conditions can vary strongly, going from direct sunlight to dark areas in tunnels. However, common visualization devices can only handle a low dynamic range, and thus a dynamic range reduction is needed. Many algorithms have been proposed in the literature to reduce the dynamic range of still pictures. However, extending the available methods to video is not straightforward, due to the peculiar nature of video data. We propose an algorithm for both reducing the dynamic range of video sequences and enhancing their appearance, thus improving visual quality and reducing temporal artifacts. We also provide an optimized version of our algorithm for a viable hardware implementation on an FPGA. The feasibility of this implementation is demonstrated by means of a case study.
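
    A minimal form of such a dynamic range reduction is a global Reinhard-style operator whose exposure is derived from a temporally smoothed log-average luminance, which limits frame-to-frame flicker; the key value, smoothing constant and synthetic frames below are assumptions, not the authors' algorithm.

    # Sketch of global tone mapping for an HDR video stream with temporal smoothing.
    import numpy as np

    def tonemap_sequence(hdr_frames, key=0.18, alpha=0.9):
        smoothed = None
        out = []
        for hdr in hdr_frames:                         # hdr: float32 luminance image
            log_avg = np.exp(np.mean(np.log(hdr + 1e-6)))
            # Temporal smoothing of the scene's log-average luminance (reduces flicker).
            smoothed = (log_avg if smoothed is None
                        else alpha * smoothed + (1 - alpha) * log_avg)
            scaled = key * hdr / smoothed
            ldr = scaled / (1.0 + scaled)              # Reinhard-style global operator
            out.append((255.0 * ldr).clip(0, 255).astype(np.uint8))
        return out

    frames = [np.random.rand(480, 640).astype(np.float32) * 1000.0 for _ in range(5)]
    ldr_frames = tonemap_sequence(frames)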

  12. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Wu, Shengli, E-mail: slwu@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Tian, Jinshou [Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Liu, Zhen [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Fang, Yuman [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Gao, Guilong; Liang, Lingliang [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China); Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Wen, Wenlong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-11-01

    An intelligent control system for an X ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, switch of sweep voltage, acquiring environment parameters etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multi-outputs and a single mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desirable data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and it shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of multi-channel laser on the Inertial Confinement Fusion Facility.

  13. Development of intelligent control system for X-ray streak camera in diagnostic instrument manipulator

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Tian, Jinshou; Liu, Zhen; Fang, Yuman; Gao, Guilong; Liang, Lingliang; Wen, Wenlong

    2015-01-01

    An intelligent control system for an X ray streak camera in a diagnostic instrument manipulator (DIM) is proposed and implemented, which can control time delay, electric focusing, image gain adjustment, switch of sweep voltage, acquiring environment parameters etc. The system consists of 16 A/D converters and 16 D/A converters, a 32-channel general purpose input/output (GPIO) and two sensors. An isolated DC/DC converter with multi-outputs and a single mode fiber were adopted to reduce the interference generated by the common ground among the A/D, D/A and I/O. The software was designed using graphical programming language and can remotely access the corresponding instrument from a website. The entire intelligent control system can acquire the desirable data at a speed of 30 Mb/s and store it for later analysis. The intelligent system was implemented on a streak camera in a DIM and it shows a temporal resolution of 11.25 ps, spatial distortion of less than 10% and dynamic range of 279:1. The intelligent control system has been successfully used in a streak camera to verify the synchronization of multi-channel laser on the Inertial Confinement Fusion Facility

  14. Vibration control of a camera mount system for an unmanned aerial vehicle using piezostack actuators

    International Nuclear Information System (INIS)

    Oh, Jong-Seok; Choi, Seung-Bok; Han, Young-Min

    2011-01-01

    This work proposes an active mount for the camera systems of unmanned aerial vehicles (UAV) in order to control unwanted vibrations. An active actuator of the proposed mount is devised as an inertial type, in which a piezostack actuator is directly connected to the inertial mass. After evaluating the actuating force of the actuator, it is combined with the rubber element of the mount, whose natural frequency is determined based on the measured vibration characteristics of UAV. Based on the governing equations of motion of the active camera mount, a robust sliding mode controller (SMC) is then formulated with consideration of parameter uncertainties and hysteresis behavior of the actuator. Subsequently, vibration control performances of the proposed active mount are experimentally evaluated in the time and frequency domains. In addition, a full camera mount system of UAVs that is supported by four active mounts is considered and its vibration control performance is evaluated in the frequency domain using a hardware-in-the-loop simulation (HILS) method

  15. Ladder beam and camera video recording system for evaluating forelimb and hindlimb deficits after sensorimotor cortex injury in rats.

    Science.gov (United States)

    Soblosky, J S; Colgin, L L; Chorney-Lane, D; Davidson, J F; Carey, M E

    1997-12-30

    Hindlimb and forelimb deficits in rats caused by sensorimotor cortex lesions are frequently tested by using the narrow flat beam (hindlimb), the narrow pegged beam (hindlimb and forelimb) or the grid-walking (forelimb) tests. Although these are excellent tests, the narrow flat beam generates non-parametric data, so the use of more powerful parametric statistical analyses is precluded. All these tests can be difficult to score if the rat is moving rapidly. Foot misplacements, especially on the grid-walking test, are indicative of an ongoing deficit, but have not been reliably and accurately described and quantified previously. In this paper we present an easy-to-construct and easy-to-use horizontal ladder-beam with a camera system on rails which can be used to evaluate both hindlimb and forelimb deficits in a single test. By slow-motion videotape playback we were able to quantify and demonstrate foot misplacements which go beyond the recovery period usually seen using more conventional measures (i.e. footslips and footfaults). This convenient system provides a rapid and reliable method for recording and evaluating rat performance on any type of beam and may be useful for measuring sensorimotor recovery following brain injury.

  16. Improved control of exogenous attention in action video game players

    Directory of Open Access Journals (Sweden)

    Matthew S Cain

    2014-02-01

    Full Text Available Action video game players have demonstrated a number of attentional advantages over non-players. Here, we propose that many of those benefits might be underpinned by improved control over exogenous (i.e., stimulus-driven) attention. To test this we used an anti-cuing task, in which a sudden-onset cue indicated that the target would likely appear in a separate location on the opposite side of the fixation point. When the time between the cue onset and the target onset was short (40 ms), non-players (nVGPs) showed a typical exogenous attention effect. Their response times were faster to targets presented at the cued (but less probable) location compared with the opposite (more probable) location. Video game players (VGPs), however, were less likely to have their attention drawn to the location of the cue. When the onset asynchrony was long (600 ms), VGPs and nVGPs were equally able to endogenously shift their attention to the likely (opposite) target location. In order to rule out processing-speed differences as an explanation for this result, we also tested VGPs and nVGPs on an attentional blink task. In a version of the attentional blink task that minimized demands on task switching and iconic memory, VGPs and nVGPs did not differ in second target identification performance (i.e., VGPs had the same magnitude of attentional blink as nVGPs), suggesting that the anti-cuing results were due to flexible control over exogenous attention rather than to more general speed-of-processing differences.

  17. The multi-camera optical surveillance system (MOS)

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.; Richter, B.; Gaertner, K.J.; Laszlo, G.; Neumann, G.

    1991-01-01

    The transition from film camera to video surveillance systems, in particular the implementation of high capacity multi-camera video systems, results in a large increase in the amount of recorded scenes. Consequently, there is a substantial increase in the manpower requirements for review. Moreover, modern microprocessor controlled equipment facilitates the collection of additional data associated with each scene. Both the scene and the annotated information have to be evaluated by the inspector. The design of video surveillance systems for safeguards necessarily has to account for both appropriate recording and reviewing techniques. An aspect of principal importance is that the video information is stored on tape. Under the German Support Programme to the Agency a technical concept has been developed which aims at optimizing the capabilities of a multi-camera optical surveillance (MOS) system including the reviewing technique. This concept is presented in the following paper including a discussion of reviewing and reliability

  18. Optimization of thermal waste treatment with the INSPECT system by camera based characteristic values and Fuzzy Control; Optimierung der thermischen Abfallbehandlung mit dem INSPECT-System durch kamerabasierte Kenngroessen und Fuzzy Control

    Energy Technology Data Exchange (ETDEWEB)

    Keller, H.B.; Matthes, J. [Forschungszentrum Karlsruhe GmbH, Eggenstein-Leopoldshafen (Germany). Inst. fuer Angewandte Informatik; Schoenecker, H.; Krakau, T. [ci-Tec GmbH, Karlsruhe (Germany)

    2007-07-01

    The application of modern control and regulation procedures to industrial processes requires knowledge of important process variables that cannot be measured on-line. Optical measurements by means of video and infrared cameras enable non-destructive observation of the combustion process in its locally different stages of development. Process variables can thus be determined in real time by means of special image-processing software. INSPECT applications have been developed for grate firings and rotary kilns. The characteristic values computed in this way can be used directly, by means of a fuzzy control module or by external systems, for the optimization of the firing control. The enormous potential of optimization with the INSPECT system is impressively confirmed by the installations to date and by the process improvements obtained with them. The INSPECT system was developed in co-operation with ci-Tec GmbH and is marketed, maintained and further developed by ci-Tec GmbH.

  19. Opto-mechanical design of the G-CLEF flexure control camera system

    Science.gov (United States)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the very first light instrument of the Giant Magellan Telescope (GMT). The G-CLEF is a fiber-fed, optical band echelle spectrograph that is capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and the focus errors within the GCFEA. The FCC consists of five optical components: a collimator including triple lenses for producing a pupil, neutral density filters allowing us to use a much brighter star as a target or a guide, a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror, a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane, and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified after the PDR in April 2015.

  20. MAVIS: Mobile Acquisition and VISualization - a professional tool for video recording on a mobile platform

    OpenAIRE

    Watten, Phil; Gilardi, Marco; Holroyd, Patrick; Newbury, Paul

    2015-01-01

    Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera and are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical ...

  1. Video game performances are preserved in ADHD children compared with controls.

    Science.gov (United States)

    Bioulac, Stéphanie; Lallemand, Stéphanie; Fabrigoule, Colette; Thoumy, Anne-Laure; Philip, Pierre; Bouvard, Manuel Pierre

    2014-08-01

    Although ADHD and excessive video game playing have received some attention, few studies have explored the performances of ADHD children when playing video games. The authors hypothesized that the performances of ADHD children would be as good as those of control children in motivating video game tasks but not in the Continuous Performance Test II (CPT II). The sample consisted of 26 ADHD children and 16 control children. Performances of ADHD and control children were compared on three commercially available games, on the repetition of every game, and on the CPT II. ADHD children had lower performances on the CPT II than did controls, but they exhibited performances equivalent to controls when playing video games at both sessions and on all three games. When playing video games, ADHD children present no difference in inhibitory performances compared with control children. This demonstrates that cognitive difficulties in ADHD are task dependent. © 2012 SAGE Publications.

  2. Performance Analysis of Video PHY Controller Using Unidirection and Bi-directional IO Standard via 7 Series FPGA

    DEFF Research Database (Denmark)

    Das, Bhagwan; Abdullah, M F L; Hussain, Dil muhammed Akbar

    2017-01-01

    graphics consumes more power, which creates a need for a low-power design of the Video PHY controller. In this paper, the performance of the Video PHY controller is analyzed by comparing the power consumption of unidirectional and bi-directional IO Standards on a 7 series FPGA. It is determined...... that total on-chip power is reduced for the unidirectional IO Standard based Video PHY controller compared to the bidirectional IO Standard based Video PHY controller. The most significant achievement of this work is the conclusion that the unidirectional IO Standard based Video PHY controller consumes the least...... standby power compared to the bidirectional IO Standard based Video PHY controller. For a Video PHY controller operated at 6 GHz, total on-chip power is reduced by 32% using the unidirectional IO Standard based Video PHY controller compared to the bidirectional IO Standard based Video...

  3. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  4. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.

    Science.gov (United States)

    Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  5. A simple and accurate method for the quality control of the I.I.-DR apparatus using the CCD camera

    International Nuclear Information System (INIS)

    Igarashi, Hitoshi; Shiraishi, Akihisa; Kuraishi, Masahiko

    2000-01-01

    With the advancing development of CCD cameras, the I.I.-DR apparatus has been introduced into the x-ray fluoroscopy television system. Consequently, quality control of the system has become a complicated task. We developed a simple, accurate method for quality control of the I.I.-DR apparatus using the CCD camera. Experiments were performed separately for the imager system [laser imager, DDX (dynamic digital x-ray system)] and the imaging system (I.I., ND-filter, IRIS, CCD camera). Quality control of the imager system was done by simply examining both input and output characteristics with a sliding pattern. Quality control of the imaging system was conducted by estimating the AVE (average volume element), which was obtained using a phantom under constant conditions. The results indicated that this simplified method is useful as a weekly quality control check of the I.I.-DR apparatus using the CCD camera. (author)

  6. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    Science.gov (United States)

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
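
    The record describes estimating an unknown, time-varying image Jacobian across repeated trials. As a simplified stand-in for the paper's neural-network estimator, the sketch below uses a Broyden-style rank-one correction of the Jacobian estimate together with a damped pseudo-inverse control step; names and gains are illustrative only.

```python
import numpy as np

def broyden_update(J, d_feat, d_q, lam=0.2):
    """Rank-one (Broyden) correction of an image-Jacobian estimate J so that
    J @ d_q better explains the observed image-feature change d_feat.
    A simpler stand-in for the paper's neural-network Jacobian estimator."""
    d_q = d_q.reshape(-1, 1)
    d_feat = d_feat.reshape(-1, 1)
    denom = float(d_q.T @ d_q) + 1e-9
    return J + lam * (d_feat - J @ d_q) @ d_q.T / denom

def servo_step(J, feat_err, gain=0.5):
    """Joint-velocity command from the current feature error via the
    pseudo-inverse of the estimated Jacobian (illustrative gain)."""
    return -gain * np.linalg.pinv(J) @ feat_err
```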

  7. Automatic Camera Control System for a Distant Lecture with Videoing a Normal Classroom.

    Science.gov (United States)

    Suganuma, Akira; Nishigori, Shuichiro

    The growth of communication network technology enables students to take part in distant lectures. Although many lectures in universities are conducted using Web content, normal lectures using a blackboard are still held. The latter lecture style is good for a teacher's dynamic explanation. A way to modify it for a distant lecture is to…

  8. Survey of Current Status of Quality Control of Gamma Cameras in Republic of Korea

    International Nuclear Information System (INIS)

    Choe, Jae Gol; Joh, Cheol Woo

    2008-01-01

    It is widely recognized that a good quality control (QC) program is essential for adequate imaging diagnosis using a gamma camera. The purpose of this study is to survey the current status of QC of gamma cameras in the Republic of Korea in order to implement appropriate nationwide quality control guidelines and programs. Data were collected on the personnel, equipment and appropriateness of each nuclear medicine imaging laboratory's quality control practice. The survey was conducted by collecting a formatted questionnaire by mail, e-mail or interview. We also reviewed the current recommendations concerning quality assurance issued by international societies. The survey revealed that the practice of quality control is irregular and not satisfactory. The irregularity of QC practice seems due partly to the lack of trained personnel, equipment, budget, time and hands-on guidelines. The implementation of a QC program may cause an additional burden to hospitals, patients and nuclear medicine laboratories. However, the benefit of a good QC program is obvious, in that hospitals can provide good quality nuclear medicine imaging studies to patients. It is important to use the least cumbersome QC protocol, to educate nuclear medicine and hospital administrative personnel concerning QC, and to establish national QC guidelines to help each individual nuclear medicine laboratory

  9. Deflection control system for prestressed concrete bridges by CCD camera. CCD camera ni yoru prestressed concrete kyo no tawami kanri system

    Energy Technology Data Exchange (ETDEWEB)

    Noda, Y.; Nakayama, Y.; Arai, T. (Kawada Construction Co. Ltd., Tokyo (Japan))

    1994-03-15

    For a long-span prestressed concrete bridge (continuous box girder and cable-stayed bridge), the design and construction control becomes increasingly complicated as construction proceeds because of its cyclic work. This paper describes the method and operation of an automatic levelling module using a CCD camera and the experimental results obtained with this system. In this automatic levelling system, the altitude can be measured automatically by measuring the location of the center of gravity of a target on the bridge surface using the CCD camera. The deflection control system developed here compares the values measured by the automatic levelling system with the design values obtained from the design calculation system, and manages them. Long-term, real-time continuous measurement, with the CCD camera set on the bridge surface, showed that stable measurement accuracy can be obtained. Successful application of this system demonstrates that it is an effective and efficient construction aid. 11 refs., 19 figs., 1 tab.
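
    The levelling measurement described above reduces to locating the centre of gravity of a bright target in the camera image. A minimal sketch of that centroid computation is shown below; the threshold, the intensity weighting and the function name are assumptions, and the conversion from pixel centroid to altitude is omitted.

```python
import numpy as np

def target_centroid(gray, threshold=128):
    """Pixel coordinates (x, y) of the centre of gravity of a bright target
    in a grayscale frame. Illustrative only; the actual system also converts
    the centroid into an altitude via its calibration."""
    mask = gray.astype(float) * (gray >= threshold)   # intensity-weighted mask
    total = mask.sum()
    if total == 0:
        raise ValueError("no target found above threshold")
    ys, xs = np.indices(gray.shape)
    return (xs * mask).sum() / total, (ys * mask).sum() / total
```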

  10. A randomized controlled trial of an educational video to improve quality of bowel preparation for colonoscopy.

    Science.gov (United States)

    Park, Jin-Seok; Kim, Min Su; Kim, HyungKil; Kim, Shin Il; Shin, Chun Ho; Lee, Hyun Jung; Lee, Won Seop; Moon, Soyoung

    2016-06-17

    High-quality bowel preparation is necessary for colonoscopy. A few studies have been conducted to investigate improvement in bowel preparation quality through patient education, but the effect of patient education on bowel preparation has not been well studied. A randomized, prospective study was conducted. All patients received regular instruction for bowel preparation during a pre-colonoscopy visit. Those scheduled for colonoscopy were randomly assigned to view an educational video instruction (video group) on the day before the colonoscopy, or to a non-video (control) group. Quality of bowel preparation, assessed using the Ottawa Bowel Preparation Quality scale (Ottawa score), was compared between the video and non-video groups. In addition, factors associated with poor bowel preparation were investigated. A total of 502 patients were randomized, 250 to the video group and 252 to the non-video group. The video group exhibited better bowel preparation (mean Ottawa total score: 3.03 ± 1.9) than the non-video group (4.21 ± 1.9), a statistically significant difference. The educational video improved the quality of bowel preparation in comparison with the standard preparation method. Clinical Research Information Service KCT0001836. Date of registration: March 8th, 2016, retrospectively registered.

  11. Survey of potential use of dynamic line phantom for quality control of Gamma camera

    International Nuclear Information System (INIS)

    Trindev, P.; Ozturk, N.

    2004-01-01

    Different phantoms used to evaluate the gamma camera parameters essential for image quality, in order to avoid artefacts, are presented. Their prices are significant, so it is a sensible approach to optimise the type and number of phantoms necessary for quality control. Among all phantoms, the price of the 'Dynamic Line Phantom' (DLP) is impressive, but it is claimed to substitute for several 'passive' and 'active' phantoms. The goal of this paper is to justify this statement. The programs, based on image profiles, are discussed in the paper and the practical uses of the different programs are given

  12. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used automatically to archive the surveillance pictures. The design of the surveillance system is described with examples of its operation

  13. Active Video Game Exercise Training Improves the Clinical Control of Asthma in Children: Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Evelim L F D Gomes

    Full Text Available The aim of the present study was to determine whether aerobic exercise involving an active video game system improved asthma control, airway inflammation and exercise capacity in children with moderate to severe asthma. A randomized, controlled, single-blinded clinical trial was carried out. Thirty-six children with moderate to severe asthma were randomly allocated to either a video game group (VGG; N = 20) or a treadmill group (TG; n = 16). Both groups completed an eight-week supervised program with two weekly 40-minute sessions. Pre-training and post-training evaluations involved the Asthma Control Questionnaire, exhaled nitric oxide levels (FeNO), maximum exercise testing (Bruce protocol) and lung function. No differences between the VGG and TG were found at the baseline. Improvements occurred in both groups with regard to asthma control and exercise capacity. Moreover, a significant reduction in FeNO was found in the VGG (p < 0.05). Although the mean energy expenditure at rest and during exercise training was similar for both groups, the maximum energy expenditure was higher in the VGG. The present findings strongly suggest that aerobic training promoted by an active video game had a positive impact on children with asthma in terms of clinical control, improvement in their exercise capacity and a reduction in pulmonary inflammation. Clinicaltrials.gov NCT01438294.

  14. Active Video Game Exercise Training Improves the Clinical Control of Asthma in Children: Randomized Controlled Trial

    Science.gov (United States)

    Gomes, Evelim L. F. D.; Carvalho, Celso R. F.; Peixoto-Souza, Fabiana Sobral; Teixeira-Carvalho, Etiene Farah; Mendonça, Juliana Fernandes Barreto; Stirbulov, Roberto; Sampaio, Luciana Maria Malosá; Costa, Dirceu

    2015-01-01

    Objective The aim of the present study was to determine whether aerobic exercise involving an active video game system improved asthma control, airway inflammation and exercise capacity in children with moderate to severe asthma. Design A randomized, controlled, single-blinded clinical trial was carried out. Thirty-six children with moderate to severe asthma were randomly allocated to either a video game group (VGG; N = 20) or a treadmill group (TG; n = 16). Both groups completed an eight-week supervised program with two weekly 40-minute sessions. Pre-training and post-training evaluations involved the Asthma Control Questionnaire, exhaled nitric oxide levels (FeNO), maximum exercise testing (Bruce protocol) and lung function. Results No differences between the VGG and TG were found at the baseline. Improvements occurred in both groups with regard to asthma control and exercise capacity. Moreover, a significant reduction in FeNO was found in the VGG (p < 0.05). The present findings strongly suggest that aerobic training promoted by an active video game had a positive impact on children with asthma in terms of clinical control, improvement in their exercise capacity and a reduction in pulmonary inflammation. Trial Registration Clinicaltrials.gov NCT01438294 PMID:26301706

  15. Active Video Game Exercise Training Improves the Clinical Control of Asthma in Children: Randomized Controlled Trial.

    Science.gov (United States)

    Gomes, Evelim L F D; Carvalho, Celso R F; Peixoto-Souza, Fabiana Sobral; Teixeira-Carvalho, Etiene Farah; Mendonça, Juliana Fernandes Barreto; Stirbulov, Roberto; Sampaio, Luciana Maria Malosá; Costa, Dirceu

    2015-01-01

    The aim of the present study was to determine whether aerobic exercise involving an active video game system improved asthma control, airway inflammation and exercise capacity in children with moderate to severe asthma. A randomized, controlled, single-blinded clinical trial was carried out. Thirty-six children with moderate to severe asthma were randomly allocated to either a video game group (VGG; N = 20) or a treadmill group (TG; n = 16). Both groups completed an eight-week supervised program with two weekly 40-minute sessions. Pre-training and post-training evaluations involved the Asthma Control Questionnaire, exhaled nitric oxide levels (FeNO), maximum exercise testing (Bruce protocol) and lung function. No differences between the VGG and TG were found at the baseline. Improvements occurred in both groups with regard to asthma control and exercise capacity. Moreover, a significant reduction in FeNO was found in the VGG (p < 0.05). The present findings strongly suggest that aerobic training promoted by an active video game had a positive impact on children with asthma in terms of clinical control, improvement in their exercise capacity and a reduction in pulmonary inflammation. Clinicaltrials.gov NCT01438294.

  16. Cross-Layer Design of Source Rate Control and Congestion Control for Wireless Video Streaming

    Directory of Open Access Journals (Sweden)

    Peng Zhu

    2007-01-01

    Full Text Available Cross-layer design has been used in streaming video over the wireless channels to optimize the overall system performance. In this paper, we extend our previous work on joint design of source rate control and congestion control for video streaming over the wired channel, and propose a cross-layer design approach for wireless video streaming. First, we extend the QoS-aware congestion control mechanism (TFRCC proposed in our previous work to the wireless scenario, and provide a detailed discussion about how to enhance the overall performance in terms of rate smoothness and responsiveness of the transport protocol. Then, we extend our previous joint design work to the wireless scenario, and a thorough performance evaluation is conducted to investigate its performance. Simulation results show that by cross-layer design of source rate control at application layer and congestion control at transport layer, and by taking advantage of the MAC layer information, our approach can avoid the throughput degradation caused by wireless link error, and better support the QoS requirements of the application. Thus, the playback quality is significantly improved, while good performance of the transport protocol is still preserved.

  17. A negative association between video game experience and proactive cognitive control.

    Science.gov (United States)

    Bailey, Kira; West, Robert; Anderson, Craig A

    2010-01-01

    Some evidence demonstrates that video game experience has a beneficial effect on visuospatial cognition. In contrast, other evidence indicates that video game experience may be negatively related to cognitive control. In this study we examined the specificity of the influence of video game experience on cognitive control. Participants with high and low video game experience performed the Stroop task while event-related brain potentials were recorded. The behavioral data revealed no difference between high and low gamers for the Stroop interference effect and a reduction in the conflict adaptation effect in high gamers. The amplitude of the medial frontal negativity and a frontal slow wave was attenuated in high gamers, and there was no effect of gaming status on the conflict slow potential. These data lead to the suggestion that video game experience has a negative influence on proactive, but not reactive, cognitive control.

  18. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator; it consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for both continuous and triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small and compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module, which handles all the user-predefined events and runs image processing algorithms to generate trigger signals; and a 10 Gigabit Ethernet compatible image readout card, which functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described
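
    The quoted data rate can be checked with a short back-of-the-envelope calculation (binary terabytes are assumed here, since that is what reproduces the 1.43 figure):

```python
# Back-of-the-envelope check of the data rate quoted for EDICAM:
# 12-bit samples, 1280 x 1024 pixels, 444 frames per second, 30 minutes.
bits_per_frame = 1280 * 1024 * 12
bytes_per_second = bits_per_frame / 8 * 444   # ~0.87 GB/s per camera
total_bytes = bytes_per_second * 30 * 60      # half an hour of recording
print(f"{total_bytes / 2**40:.2f} TiB")       # prints ~1.43 (binary) terabytes
```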

  19. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
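
    As an illustration of head-coupled control, the sketch below maps tracked head angles to platform pan/tilt/roll commands with a simple rate limit. It is only a sketch of the general idea; the VIEW system's actual control law and parameters are not given in the record.

```python
def head_to_platform(head_pan, head_tilt, head_roll, prev_cmd, max_step_deg=5.0):
    """Map tracked head angles (degrees) to pan/tilt/roll commands for the
    camera platform, limiting the per-update change to keep motion smooth.
    Hypothetical helper; the real system's control law is not described
    at this level of detail in the record."""
    cmd = []
    for target, prev in zip((head_pan, head_tilt, head_roll), prev_cmd):
        step = max(-max_step_deg, min(max_step_deg, target - prev))
        cmd.append(prev + step)
    return tuple(cmd)

# Example: head turned to (30, -10, 0) while the platform is still at (0, 0, 0)
print(head_to_platform(30.0, -10.0, 0.0, (0.0, 0.0, 0.0)))
```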

  20. Automation of pharmaceutical warehouse using groups robots with remote climate control and video surveillance

    OpenAIRE

    Zhuravska, I. M.; Popel, M. I.

    2015-01-01

    In this paper, we present a comprehensive solution for the automation of a pharmaceutical warehouse, including the implementation of climate control, video surveillance with remote access to the video, and robotic selection of medicines with optimization of the robot motion. We describe all the elements of the local area network (LAN) necessary to solve these problems.

  1. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department

    International Nuclear Information System (INIS)

    Mathers, Sandra A.; Anderson, Helen; McDonald, Sheila; Chesson, Rosemary A.

    2010-01-01

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be extremely time-consuming. This was despite the modest

  2. Realization of the ergonomics design and automatic control of the fundus cameras

    Science.gov (United States)

    Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye

    2012-12-01

    The principle of ergonomic design in fundus cameras is to extend user comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects the patient's fundus images automatically whether their eyes are ametropic or not. Finally, a moving visual target is developed for expanding the fields of the fundus images.
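
    The auto-focusing device is described only at a high level. The following sketch shows one common way such a device can work, using a contrast (gradient-variance) focus metric and a search over lens positions; grab_frame and move_lens are hypothetical interfaces, not part of the system described.

```python
import numpy as np

def sharpness(img):
    """Focus metric: variance of the image gradient (higher = sharper)."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx**2 + gy**2).var())

def autofocus(grab_frame, move_lens, positions):
    """Simple search autofocus: sample each candidate lens position and keep
    the one with the highest contrast metric. grab_frame() and move_lens()
    are hypothetical camera/lens interfaces used only for illustration."""
    best_pos, best_score = None, -1.0
    for p in positions:
        move_lens(p)
        score = sharpness(grab_frame())
        if score > best_score:
            best_pos, best_score = p, score
    move_lens(best_pos)
    return best_pos
```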

  3. An X-ray camera for single-crystal studies at high temperatures under controlled atmosphere

    International Nuclear Information System (INIS)

    Adlhart, W.; Tzafaras, N.; Sueno, S.; Jagodzinski, H.; Huber, H.

    1982-01-01

    A vacuum heating camera has been developed for extremely low background X-ray film work between room temperature and 2000 K. It can be used with modified conventional Weissenberg goniometers and with a specially designed focusing goniometer. The temperature control is maintained by a Pt/Pt-10% Rh thermocouple, a three-term proportional, integral and derivative (PID) controller and a programmable power supply. The accuracy in the absolute temperature setting is 10 K, the stability better than 1 K and the maximum thermal gradient over the crystal 7 K mm⁻¹ at 1330 K. A small oxygen pressure can be applied, depending on the temperature, to control oxidation or reduction reactions of the sample. (Auth.)
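
    For illustration, a minimal discrete form of the three-term (PID) temperature control mentioned above might look as follows; the gains, sample time and output limits are placeholders, not the values used with the camera.

```python
class PID:
    """Minimal discrete PID temperature controller with a fixed sample time.
    A sketch of three-term control in general, not the actual controller
    used with the high-temperature camera."""
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint_K, measured_K):
        err = setpoint_K - measured_K
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, out))  # heater power, %
```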

  4. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gener...

  5. Do Instructional Videos on Sputum Submission Result in Increased Tuberculosis Case Detection? A Randomized Controlled Trial.

    Science.gov (United States)

    Mhalu, Grace; Hella, Jerry; Doulla, Basra; Mhimbira, Francis; Mtutu, Hawa; Hiza, Helen; Sasamalo, Mohamed; Rutaihwa, Liliana; Rieder, Hans L; Seimon, Tamsyn; Mutayoba, Beatrice; Weiss, Mitchell G; Fenner, Lukas

    2015-01-01

    We examined the effect of an instructional video about the production of diagnostic sputum on case detection of tuberculosis (TB), and evaluated the acceptance of the video. Randomized controlled trial. We prepared a culturally adapted instructional video for sputum submission. We analyzed 200 presumptive TB cases coughing for more than two weeks who attended the outpatient department of the governmental Municipal Hospital in Mwananyamala (Dar es Salaam, Tanzania). They were randomly assigned to either receive instructions on sputum submission using the video before submission (intervention group, n = 100) or standard of care (control group, n = 100). Sputum samples were examined for volume, quality and presence of acid-fast bacilli by experienced laboratory technicians blinded to study groups. Median age was 39.1 years (interquartile range 37.0-50.0); 94 (47%) were females, 106 (53%) were males, and 49 (24.5%) were HIV-infected. We found that the instructional video intervention was associated with detection of a higher proportion of microscopically confirmed cases (56%, 95% confidence interval [95% CI] 45.7-65.9%, sputum smear-positive patients in the intervention group versus 23%, 95% CI 15.2-32.5%, in the control group). Sex modified the effectiveness of the intervention, improving it positively. When asked how well the video instructions were understood, the majority of patients in the intervention group reported having understood the video instructions well (97%). Most of the patients thought the video would be useful in the cultural setting of Tanzania (92%). Sputum submission instructional videos increased the yield of tuberculosis cases through better quality of sputum samples. If confirmed in larger studies, instructional videos may have a substantial effect on the case yield using sputum microscopy and also molecular tests. This low-cost strategy should be considered as part of the efforts to control TB in resource-limited settings. Pan African

  6. Realisation of a gamma emission tomograph by a servo-controlled camera and bed

    International Nuclear Information System (INIS)

    Guzman-Torres, D.R.

    1980-07-01

    We took part in the building of a transverse axial emission tomograph intended for nuclear medicine. The following three points were dealt with: mathematical, the choice of processing algorithm; electronic, the development of equipment; experimental, the testing of the system built. On the mathematical side, following a survey of reconstruction methods, we studied the use of a reconstruction algorithm based on filtering of the projections by convolution, which gives good spatial resolution. We also proposed a means to address the trade-off between computing time and image quality, leading to a satisfactory result within a shorter total investigation time; in this way the computing time has been reduced by a factor of three. In the electronics field we built an interface between the bed, the gamma camera and the computer already in the laboratory; the present instrument corresponds to version no. 2, and the system controls the bed and gamma camera, which are operated from the computer. Experimentally, by checking the calculations with a phantom made up of small emitting sources, we proved our ability to locate active foci in the patient by finding their exact position. While the results obtained are encouraging from the image reconstruction viewpoint, the study of problems related to self-absorption inside the organ and to statistical noise still has to be continued [fr
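
    The reconstruction approach mentioned above, filtering the projections and back-projecting them, is the classical filtered back-projection method. A textbook sketch for parallel-beam data is given below; it is not the thesis' implementation and ignores attenuation and noise.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Minimal parallel-beam reconstruction: ramp-filter each projection
    (the 'filtering by convolution' step, done here in the frequency
    domain), then back-project. A textbook sketch only."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))                  # ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    image = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    ys, xs = np.mgrid[0:n_det, 0:n_det] - centre
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate of every image pixel for this viewing angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        image += proj[idx]
    return image * np.pi / len(angles_deg)
```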

  7. An atlas of artefacts: a teaching tool for scintillation camera quality control

    International Nuclear Information System (INIS)

    Busemann Sokole, E.; Graham, L.S.; Todd-Pokropek, A.; Wegst, A.; Robilotta, C.C.

    2002-01-01

    Aim: Evaluating results from quality control tests and recognizing possible artefacts in clinical images requires an insight into typical and atypical images and quantitative results. Learning to recognize what is acceptable requires access to different examples, and knowledge of underlying principles of image production. An atlas of image artefacts has been assembled to assist with interpreting quality control tests and recognizing artefacts in clinical images. The project was initiated and supported by the International Atomic Energy Agency (IAEA). Methods: The artefact atlas was developed and written by the first author of this abstract, based on images and information submitted by nuclear medicine users from around the world. Each image artefact example includes an accompanying descriptive text comprising a brief description of the data acquisition, radionuclide/radiopharmaceutical, circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, as appropriate, giving guidelines for trouble shooting and follow-up; and, occasional literature references. The images supplied were mostly hardcopy results on film or paper. All images were digitized into JPEG format for inclusion into a digital document. Most examples were contained on one page. The atlas was reviewed by an international group of experts. Results: A total of about 250 examples from 26 centres in 16 countries were included. The examples are grouped into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artefacts due to system malfunction, and artefacts due to environmental situations. Some image patterns are generic, others specific to system design. Results from different camera systems, as well as new and old generation systems have been included

  8. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meet this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to provide such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function that makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.
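
    The synchronization idea, recording camera field data alongside each video frame and reading it back during playback, can be sketched as follows; the record layout (pan/tilt/zoom per timestamp) is an assumption, since the paper does not specify it.

```python
import bisect

class CameraFieldTrack:
    """Record pan/tilt/zoom alongside each video frame timestamp and look the
    field data up again during playback, mirroring the idea of transmitting
    camera-field data synchronously with the video stream. The field names
    are illustrative, not the paper's actual record layout."""
    def __init__(self):
        self.times = []    # sorted frame timestamps (seconds)
        self.fields = []   # (pan_deg, tilt_deg, zoom) per frame

    def record(self, t, pan, tilt, zoom):
        self.times.append(t)
        self.fields.append((pan, tilt, zoom))

    def field_at(self, t):
        """Camera field of the most recent frame at or before time t."""
        i = bisect.bisect_right(self.times, t) - 1
        return self.fields[max(i, 0)]
```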

  9. Numerical simulations and analyses of temperature control loop heat pipe for space CCD camera

    Science.gov (United States)

    Meng, Qingliang; Yang, Tao; Li, Chunlin

    2016-10-01

    As one of the key units of a space CCD camera, the temperature range and stability of the CCD components affect the imaging quality indexes. Reasonable thermal design and robust thermal control devices are needed. A temperature control loop heat pipe (TCLHP) is designed, which meets the thermal control requirements of the CCD components well. In order to study the dynamic behaviors of heat and mass transfer of the TCLHP, particularly in the orbital flight case, a transient numerical model is developed by using well-established empirical correlations for the flow models within three-dimensional thermal modeling. The temperature control principle and details of the mathematical model are presented. The model is used to study the operating state and the flow and heat characteristics, based on analyses of the variations of temperature, pressure and quality under different operating modes and external heat flux variations. The results indicate that the TCLHP can satisfy the thermal control requirements of the CCD components well, and always ensures good temperature stability and uniformity. By comparison between flight data and simulated results, the model is found to be accurate to within 1°C. The model can thus be used for predicting and understanding the transient performance of the TCLHP.

  10. Developing participatory research in radiology: the use of a graffiti wall, cameras and a video box in a Scottish radiology department

    Energy Technology Data Exchange (ETDEWEB)

    Mathers, Sandra A. [Aberdeen Royal Infirmary, Department of Radiology, Aberdeen (United Kingdom); The Robert Gordon University, Faculty of Health and Social Care, Aberdeen (United Kingdom); Anderson, Helen [Royal Aberdeen Children's Hospital, Department of Radiology, Aberdeen (United Kingdom); McDonald, Sheila [Royal Aberdeen Children's Hospital, Aberdeen (United Kingdom); Chesson, Rosemary A. [University of Aberdeen, School of Medicine and Dentistry, Aberdeen (United Kingdom)

    2010-03-15

    Participatory research is increasingly advocated for use in health and health services research and has been defined as a 'process of producing new knowledge by systematic enquiry, with the collaboration of those being studied'. The underlying philosophy of participatory research is that those recruited to studies are acknowledged as experts who are 'empowered to truly participate and have their voices heard'. Research methods should enable children to express themselves. This has led to the development of creative approaches of working with children that offer alternatives to, for instance, the structured questioning of children by researchers either through questionnaires or interviews. To examine the feasibility and potential of developing participatory methods in imaging research. We employed three innovative methods of data collection sequentially, namely the provision of: 1) a graffiti wall; 2) cameras, and 3) a video box for children's use. While the graffiti wall was open to all who attended the department, for the other two methods children were allocated to each 'arm' consecutively until our target of 20 children for each was met. The study demonstrated that it was feasible to use all three methods of data collection within the context of a busy radiology department. We encountered no complaints from staff, patients or parents. Children were willing to participate but we did not collect data to establish if they enjoyed the activities, were pleased to have the opportunity to make comments or whether anxieties about their treatment inhibited their participation. The data yield was disappointing. In particular, children's contributions to the graffiti wall were limited, but did reflect the nature of graffiti, and there may have been some 'copycat' comments. Although data analysis was relatively straightforward, given the nature of the data (short comments and simple drawings), the process proved to be

  11. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  12. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  13. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
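
    The target-extraction stage described in these reports, separating moving targets from the background in each camera's stream, can be approximated with a stock background subtractor. The sketch below uses OpenCV 4 as a simplified stand-in for the LDRD algorithms; it does not include the hand-off or 3D display stages.

```python
import cv2

def moving_targets(frame, subtractor, min_area=500):
    """Bounding boxes of moving targets in one camera frame, using a stock
    OpenCV background subtractor as a simplified stand-in for the report's
    target-extraction stage."""
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)            # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

# One subtractor per camera in the network (camera ids are illustrative)
subtractors = {cam_id: cv2.createBackgroundSubtractorMOG2(detectShadows=True)
               for cam_id in ("cam0", "cam1", "cam2")}
```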

  14. The implementation of quality controls of gamma camera functioning and simulation of tomography techniques by Gate and GEANT4

    International Nuclear Information System (INIS)

    Ben Ameur, Narjes

    2011-01-01

    The reliability of medical devices is directly linked to the quality of the services offered to the patient. For this reason, quality control tests should be conducted regularly in every nuclear medicine service according to international norms. Our approach consists of performing the different quality control tests recommended by the NEMA norm on a gamma camera in order to evaluate its performance. The data obtained allowed us to study the different physical phenomena occurring during a SPECT exam and to identify those affecting image quality, based on the simulation programmes GEANT4 and Gate. The quality control results showed that the gamma camera has high performance in terms of spatial resolution, linearity, uniformity and center of rotation. The establishment of a model of a Symbia E (Siemens) gamma camera using the Gate platform confirms the reliability of this platform for the design and optimization of detectors.
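
    One of the routine quality control figures mentioned here, flood-field uniformity, can be computed in a few lines. The sketch below gives a NEMA-style integral uniformity under the simplifying assumption that the whole array is the useful field of view; it is not the protocol used in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def integral_uniformity(flood, smooth_px=3):
    """NEMA-style integral uniformity (%) of a flood-field image:
    100 * (max - min) / (max + min) after light smoothing. Simplified
    sketch: the useful field of view is taken to be the whole array."""
    img = uniform_filter(flood.astype(float), size=smooth_px)
    return 100.0 * (img.max() - img.min()) / (img.max() + img.min())
```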

  15. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  16. Development of a time-of-flight Compton camera prototype for online control of ion therapy and medical imaging

    International Nuclear Information System (INIS)

    Ley, Jean-Luc

    2015-01-01

    Hadron-therapy is one of the modalities available for treating cancer. This modality uses light ions (protons, carbon ions) to destroy cancer cells. Such particles offer ballistic accuracy thanks to their quasi-rectilinear trajectory, their finite range, and the maximum of the dose deposition profile at the end of their path. Compared to conventional radiotherapy, this makes it possible to spare the healthy tissue located upstream and downstream of the tumor. One of this modality's quality assurance challenges is to control the positioning of the dose deposited by the ions in the patient. One possibility for performing this control is to detect the prompt gammas emitted during nuclear reactions induced along the ion path in the patient. A Compton camera prototype, theoretically allowing the detection efficiency for prompt gammas to be maximized, is being developed within a regional collaboration. This camera was the main focus of my thesis, and particularly the following points: i) studying, through Monte Carlo simulations, the operation of the prototype under construction, particularly with respect to the counting rates expected on the different types of accelerators used in hadron-therapy; ii) conducting simulation studies on the use of this camera in clinical imaging; iii) characterising the silicon detectors (scatterer); iv) comparing Geant4 simulations of the camera's response with beam measurements made with a demonstrator. As a result, the Compton camera prototype developed makes it possible to control the localization of the dose deposition in proton therapy at the scale of a single spot, provided that the intensity of the clinical proton beam is reduced by a factor of 200 (intensity of 10⁸ protons/s). An application of the Compton camera in nuclear medicine seems attainable with the use of radioisotopes of energy greater than 300 keV. These initial results must be confirmed by more realistic simulations (homogeneous and heterogeneous PMMA targets). Tests with the progressive
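
    A Compton camera reconstructs a cone for each detected prompt gamma from the energies deposited in the scatterer and the absorber. The standard Compton kinematics behind that cone angle are sketched below; the full-absorption assumption and the example energies are illustrative, and this is not the prototype's reconstruction code.

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_keV, e_absorber_keV):
    """Opening angle (degrees) of the Compton cone from the energies
    deposited in the scatterer and absorber, assuming the scattered photon
    is fully absorbed in the second stage (standard Compton kinematics)."""
    e_total = e_scatter_keV + e_absorber_keV          # initial photon energy
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_absorber_keV - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies not kinematically consistent")
    return math.degrees(math.acos(cos_theta))

# e.g. a 4.4 MeV prompt gamma depositing 500 keV in the silicon scatterer
print(compton_cone_angle(500.0, 3900.0))
```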

  17. Current-Loop Control for the Pitching Axis of Aerial Cameras via an Improved ADRC

    Directory of Open Access Journals (Sweden)

    BingYou Liu

    2017-01-01

    Full Text Available An improved active disturbance rejection controller (ADRC is designed to eliminate the influences of the current-loop for the pitching axis control system of an aerial camera. The improved ADRC is composed of a tracking differentiator (TD, an improved extended state observer (ESO, an improved nonlinear state error feedback (NLSEF, and a disturbance compensation device (DCD. The TD is used to arrange transient process. The improved ESO is utilized to observe the state extended by nonlinear dynamics, model uncertainty, and external disturbances. Overtime variation of the current-loop can be predicted by the improved ESO. The improved NLSEF is adopted to restrain the residual errors of the current-loop. The DCD is used to compensate the overtime variation of the current-loop in real time. The improved ADRC is designed based on a new nonlinear function newfal(·). This function exhibits enhanced continuity and smoothness compared to previously available nonlinear functions. Thus, the new nonlinear function can effectively decrease the high-frequency flutter phenomenon. The improved ADRC exhibits improved control performance, and disturbances of the current-loop can be eliminated by the improved ADRC. Finally, simulation experiments are performed. Results show that the improved ADRC displayed better performance than the proportional integral (PI control strategy and traditional ADRC.
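
    Since the exact form of newfal(·) is not given in the record, the sketch below shows the classical ADRC building blocks it refines: the piecewise fal(·) gain and one Euler step of a third-order extended state observer. Gains and exponents are placeholders, not the paper's tuning.

```python
import math

def fal(e, alpha, delta):
    """Classical ADRC nonlinear gain: linear near zero error, |e|**alpha
    further out. The paper's newfal(.) is a smoother variant whose exact
    form is not given in the record, so the classical function is shown."""
    if abs(e) <= delta:
        return e / (delta ** (1.0 - alpha))
    return math.copysign(abs(e) ** alpha, e)

def eso_step(z1, z2, z3, y, u, b0, betas, dt):
    """One Euler step of a third-order extended state observer: z1 tracks
    the measured output y, z2 its derivative, z3 the lumped disturbance."""
    e = z1 - y
    z1 += dt * (z2 - betas[0] * e)
    z2 += dt * (z3 - betas[1] * fal(e, 0.5, 0.01) + b0 * u)
    z3 += dt * (-betas[2] * fal(e, 0.25, 0.01))
    return z1, z2, z3
```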

  18. Video game addiction in emerging adulthood: Cross-sectional evidence of pathology in video game addicts as compared to matched healthy controls.

    Science.gov (United States)

    Stockdale, Laura; Coyne, Sarah M

    2018-01-01

    The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to controls matched on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and more ADHD symptoms, compared to controls. Additionally, addicts displayed increased emotional difficulties, including increased depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addiction is a valid phenomenon. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. The effects of video game playing on attention, memory, and executive control.

    Science.gov (United States)

    Boot, Walter R; Kramer, Arthur F; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele

    2008-11-01

    Expert video game players often outperform non-players on measures of basic attention and performance. Such differences might result from exposure to video games or they might reflect other group differences between those people who do or do not play video games. Recent research has suggested a causal relationship between playing action video games and improvements in a variety of visual and attentional skills (e.g., [Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423, 534-537]). The current research sought to replicate and extend these results by examining both expert/non-gamer differences and the effects of video game playing on tasks tapping a wider range of cognitive abilities, including attention, memory, and executive control. Non-gamers played 20+ h of an action video game, a puzzle game, or a real-time strategy game. Expert gamers and non-gamers differed on a number of basic cognitive skills: experts could track objects moving at greater speeds, better detected changes to objects stored in visual short-term memory, switched more quickly from one task to another, and mentally rotated objects more efficiently. Strikingly, extensive video game practice did not substantially enhance performance for non-gamers on most cognitive tasks, although they did improve somewhat in mental rotation performance. Our results suggest that at least some differences between video game experts and non-gamers in basic cognitive performance result either from far more extensive video game experience or from pre-existing group differences in abilities that result in a self-selection effect.

  20. Video game practice optimizes executive control skills in dual-task and task switching situations.

    Science.gov (United States)

    Strobach, Tilo; Frensch, Peter A; Schubert, Torsten

    2012-05-01

    We examined the relation between action video game practice and the optimization of executive control skills that are needed to coordinate two different tasks. As action video games are similar to real life situations and complex in nature, and include numerous concurrent actions, they may generate an ideal environment for practicing these skills (Green & Bavelier, 2008). For two types of experimental paradigms, dual-task and task switching, we obtained performance advantages for experienced video gamers compared to non-gamers in situations in which two different tasks were processed simultaneously or sequentially. This advantage was absent in single-task situations. These findings indicate optimized executive control skills in video gamers. Similar findings in non-gamers after 15 h of action video game practice, when compared to non-gamers with practice on a puzzle game, clarified the causal relation between video game practice and the optimization of executive control skills. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  2. Joystick-controlled video console game practice for developing power wheelchairs users' indoor driving skills.

    Science.gov (United States)

    Huang, Wei Pin; Wang, Chia Cheng; Hung, Jo Hua; Chien, Kai Chun; Liu, Wen-Yu; Cheng, Chih-Hsiu; Ng, How-Hing; Lin, Yang-Hua

    2015-02-01

    [Purpose] This study aimed to determine the effectiveness of joystick-controlled video console games in enhancing subjects' ability to control power wheelchairs. [Subjects and Methods] Twenty healthy young adults without prior experience of driving power wheelchairs were recruited. Four commercially available video games were used as training programs to practice joystick control in catching falling objects, crossing a river, tracing the route while floating on a river, and navigating through a garden maze. An indoor power wheelchair driving test, including straight lines, and right and left turns, was completed before and after the video game practice, during which electromyographic signals of the upper limbs were recorded. The paired t-test was used to compare the differences in driving performance and muscle activities before and after the intervention. [Results] Following the video game intervention, participants took significantly less time to complete the course, with less lateral deviation when turning the indoor power wheelchair. However, muscle activation in the upper limbs was not significantly affected. [Conclusion] This study demonstrates the feasibility of using joystick-controlled commercial video games to train individuals in the control of indoor power wheelchairs.

  3. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    Science.gov (United States)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capturing) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
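
    The underlying visual-servo idea can be written in a few lines: drive the camera (or vehicle) with a velocity proportional to the pseudo-inverse of the interaction matrix applied to the image-feature error. The sketch below is the textbook law only; the paper's contribution is the adaptive and repetitive extensions on top of it, which are not shown.

```python
import numpy as np

def ibvs_velocity(features, desired, interaction_matrix, gain=0.5):
    """Classic image-based visual-servo law: velocity command
    v = -gain * pinv(L) @ (s - s*). A textbook sketch of the visual-servo
    idea; the paper's controller additionally adapts to unknown camera
    parameters and repeats over the periodic reference."""
    error = np.asarray(features, float) - np.asarray(desired, float)
    return -gain * np.linalg.pinv(interaction_matrix) @ error
```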

  4. CCD camera eases the control of a soda recovery boiler; CCD-kamera helpottaa soodakattilan valvontaa

    Energy Technology Data Exchange (ETDEWEB)

    Kinnunen, L.

    2001-07-01

    Fortum Technology has developed a CCD firebox camera, based on semiconductor technology, that endures the hard conditions of a soda recovery boiler longer than traditional cameras. The firebox camera is air-cooled, and the same air is blown over the main lens so that it remains clean despite the alkaline liquor splashing around in the boiler. The image of the boiler is transferred through the main lens, an image transfer lens and a special filter, mounted inside the camera tube, into the CCD camera. The first CCD camera system has been in use since 1999 in the Sunila pulp mill in Kotka, owned by Myllykoski Oy and Enso Oyj. The mill has two medium-sized soda recovery boilers. The amount of black liquor formed daily is about 2000 tons DS, which is more than enough for heat generation. Even electric power generation sometimes exceeds demand, so the surplus power can be sold. Black liquor is sprayed into the soda recovery boiler at high pressure. The liquor forms droplets in the boiler, the temperature of which is over 1000 deg C. A hot pile is formed at the bottom of the boiler after burning. The size and shape of the pile affect the efficiency and the emissions of the boiler. The camera has operated well.

  5. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which withstands a total dose of 10⁶ - 10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  6. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which withstands a total dose of 10⁶ - 10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  7. Do Motion Controllers Make Action Video Games Less Sedentary? A Randomized Experiment

    OpenAIRE

    Lyons, Elizabeth J.; Tate, Deborah F.; Ward, Dianne S.; Ribisl, Kurt M.; Bowling, J. Michael; Kalyanaraman, Sriram

    2012-01-01

    Sports- and fitness-themed video games using motion controllers have been found to produce physical activity. It is possible that motion controllers may also enhance energy expenditure when applied to more sedentary games such as action games. Young adults (N = 100) were randomized to play three games using either motion-based or traditional controllers. No main effect was found for controller or game pair (P > .12). An interaction was found such that in one pair, motion control (mean [SD] 0....

  8. Human recognition in a video network

    Science.gov (United States)

    Bhanu, Bir

    2009-10-01

    Video networks are an emerging interdisciplinary field with significant and exciting scientific and technological challenges. They hold great promise for solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, covering camera selection, hand-off and control, and anomaly detection. It addresses challenging questions of individual identification using gait and face at a distance, and presents new techniques and their comparison for robust identification.

  9. Do Instructional Videos on Sputum Submission Result in Increased Tuberculosis Case Detection? A Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Grace Mhalu

    Full Text Available We examined the effect of an instructional video about the production of diagnostic sputum on case detection of tuberculosis (TB), and evaluated the acceptance of the video. Randomized controlled trial. We prepared a culturally adapted instructional video for sputum submission. We analyzed 200 presumptive TB cases coughing for more than two weeks who attended the outpatient department of the governmental Municipal Hospital in Mwananyamala (Dar es Salaam, Tanzania). They were randomly assigned to either receive instructions on sputum submission using the video before submission (intervention group, n = 100) or standard of care (control group, n = 100). Sputum samples were examined for volume, quality and presence of acid-fast bacilli by experienced laboratory technicians blinded to study groups. Median age was 39.1 years (interquartile range 37.0-50.0); 94 (47%) were females, 106 (53%) were males, and 49 (24.5%) were HIV-infected. We found that the instructional video intervention was associated with detection of a higher proportion of microscopically confirmed cases (56%, 95% confidence interval [95% CI] 45.7-65.9%, sputum smear positive patients in the intervention group versus 23%, 95% CI 15.2-32.5%, in the control group; p < 0.0001), an increase in specimen volume, defined as a volume ≥3 ml (78%, 95% CI 68.6-85.7%, versus 45%, 95% CI 35.0-55.3%; p < 0.0001), and specimens less likely to be salivary (14%, 95% CI 7.9-22.4%, versus 39%, 95% CI 29.4-49.3%; p = 0.0001). Older age, but not HIV status or sex, modified the effectiveness of the intervention, increasing its effect. When asked how well the video instructions were understood, the majority of patients in the intervention group reported that they had understood the video instructions well (97%). Most of the patients thought the video would be useful in the cultural setting of Tanzania (92%). Sputum submission instructional videos increased the yield of tuberculosis cases through better quality of sputum

  10. Live lecture versus video podcast in undergraduate medical education: A randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Fukuta Junaid

    2010-10-01

    Full Text Available Abstract Background Information technology is finding an increasing role in the training of medical students. We compared information recall and student experience and preference after live lectures and video podcasts in undergraduate medical education. Methods We performed a crossover randomised controlled trial. 100 students were randomised to live lecture or video podcast for one clinical topic. Live lectures were given by the same instructor as the narrator of the video podcasts. The video podcasts comprised Powerpoint™ slides narrated using the same script as the lecture. They were then switched to the other group for a second clinical topic. Knowledge was assessed using multiple choice questions and qualitative information was collected using a questionnaire. Results No significant difference was found on multiple choice questioning immediately after the session. The subjects enjoyed the convenience of the video podcast and the ability to stop, review and repeat it, but found it less engaging as a teaching method. They expressed a clear preference for the live lecture format. Conclusions We suggest that video podcasts are not ready to replace traditional teaching methods, but may have an important role in reinforcing learning and aiding revision.

  11. Live lecture versus video podcast in undergraduate medical education: A randomised controlled trial.

    Science.gov (United States)

    Schreiber, Benjamin E; Fukuta, Junaid; Gordon, Fabiana

    2010-10-08

    Information technology is finding an increasing role in the training of medical students. We compared information recall and student experience and preference after live lectures and video podcasts in undergraduate medical education. We performed a crossover randomised controlled trial. 100 students were randomised to live lecture or video podcast for one clinical topic. Live lectures were given by the same instructor as the narrator of the video podcasts. The video podcasts comprised Powerpoint™ slides narrated using the same script as the lecture. They were then switched to the other group for a second clinical topic. Knowledge was assessed using multiple choice questions and qualitative information was collected using a questionnaire. No significant difference was found on multiple choice questioning immediately after the session. The subjects enjoyed the convenience of the video podcast and the ability to stop, review and repeat it, but found it less engaging as a teaching method. They expressed a clear preference for the live lecture format. We suggest that video podcasts are not ready to replace traditional teaching methods, but may have an important role in reinforcing learning and aiding revision.

  12. Personalized Video Feedback and Repeated Task Practice Improve Laparoscopic Knot-Tying Skills: Two Controlled Trials.

    Science.gov (United States)

    Abbott, Eduardo F; Thompson, Whitney; Pandian, T K; Zendejas, Benjamin; Farley, David R; Cook, David A

    2017-11-01

    Compare the effect of personalized feedback (PF) vs. task demonstration (TD), both delivered via video, on laparoscopic knot-tying skills and perceived workload; and evaluate the effect of repeated practice. General surgery interns and research fellows completed four repetitions of a simulated laparoscopic knot-tying task at one-month intervals. Midway between repetitions, participants received via e-mail either a TD video (demonstration by an expert) or a PF video (video of their own performance with voiceover from a blinded senior surgeon). Each participant received at least one video per format, with sequence randomly assigned. Outcomes included performance scores and NASA Task Load Index (NASA-TLX) scores. To evaluate the effectiveness of repeated practice, scores from these trainees on a separate delayed retention test were compared against historical controls who did not have scheduled repetitions. Twenty-one trainees completed the randomized study. Mean change in performance scores was significantly greater for those receiving PF (difference = 23.1 of 150 [95% confidence interval (CI): 0, 46.2], P = .05). Perceived workload was also significantly reduced (difference = -3.0 of 20 [95% CI: -5.8, -0.3], P = .04). Compared with historical controls (N = 93), the 21 with scheduled repeated practice had higher scores on the laparoscopic knot-tying assessment two weeks after the final repetition (difference = 1.5 of 10 [95% CI: 0.2, 2.8], P = .02). Personalized video feedback improves trainees' procedural performance and perceived workload compared with a task demonstration video. Brief monthly practice sessions support skill acquisition and retention.

  13. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files store motion pictures and sound as they occur in real life. The need for automated processing of the information in video files is increasing. Such automated processing has a wide range of applications, including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, detecting and tracking object movement in video files plays an important role. This article describes methods for detecting objects in video files, a problem in the field of computer vision that is being studied worldwide.
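
    As a concrete illustration of one of the simpler detection approaches surveyed in work of this kind, the sketch below flags moving objects in a video file with OpenCV background subtraction and contour extraction. It is a generic example rather than the authors' method; the file name, thresholds, and minimum area are placeholders.

    ```python
    import cv2
    import numpy as np

    def detect_moving_objects(path, min_area=500):
        """Yield (frame, bounding boxes) for moving objects found by MOG2
        background subtraction followed by simple morphological cleanup."""
        cap = cv2.VideoCapture(path)
        subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
        kernel = np.ones((3, 3), np.uint8)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            # drop shadow pixels (marked as 127 by MOG2) and remove speckle noise
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            boxes = [cv2.boundingRect(c) for c in contours
                     if cv2.contourArea(c) >= min_area]
            yield frame, boxes
        cap.release()

    # usage with a hypothetical file:
    # for frame, boxes in detect_moving_objects("surveillance.mp4"):
    #     for x, y, w, h in boxes:
    #         cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```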

  14. Outcomes of Video-Assisted Teaching for Latching in Postpartum Women: A Randomized Controlled Trial.

    Science.gov (United States)

    Sroiwatana, Suttikamon; Puapornpong, Pawin

    2018-04-25

    Latching is an important process in breastfeeding and should be taught to and practiced by the postpartum mother. The objective was to compare latching outcomes between video-assisted and routine teaching methods among postpartum women. A randomized controlled trial was conducted. Postpartum women who had deliveries without complications were randomized into two groups: 14 cases in the video-assisted teaching group and 14 cases in a routine teaching group. In the first group, the mothers were taught breastfeeding benefits, latching methods, and breastfeeding positions, practiced breastfeeding in a controlled setting for a 30-minute period, and watched a 6-minute video with consistent content. In the second group, the mothers were taught for a standard 30-minute period and then practiced breastfeeding. In both groups, Latching on, Audible swallowing, Type of nipples, Comfort, and Help (LATCH) scores were assessed at 24-32 and 48-56 hours after the breastfeeding teaching sessions. Demographic data and LATCH scores were collected and analyzed. There were no statistically significant differences in the mothers' ages, occupations, marital status, religion, education, income, infants' gestational age, body mass index, nipple length, route of delivery, or time to first latching between the video-assisted and routine breastfeeding teaching groups. The first and second LATCH score assessments showed no significant differences between the two breastfeeding teaching groups. Video-assisted breastfeeding teaching did not improve latching outcomes when compared with routine teaching.

  15. A theory-based video messaging mobile phone intervention for smoking cessation: randomized controlled trial.

    Science.gov (United States)

    Whittaker, Robyn; Dorey, Enid; Bramley, Dale; Bullen, Chris; Denny, Simon; Elley, C Raina; Maddison, Ralph; McRobbie, Hayden; Parag, Varsha; Rodgers, Anthony; Salmon, Penny

    2011-01-21

    Advances in technology allowed the development of a novel smoking cessation program delivered by video messages sent to mobile phones. This social cognitive theory-based intervention (called "STUB IT") used observational learning via short video diary messages from role models going through the quitting process to teach behavioral change techniques. The objective of our study was to assess the effectiveness of a multimedia mobile phone intervention for smoking cessation. A randomized controlled trial was conducted with 6-month follow-up. Participants had to be 16 years of age or over, be current daily smokers, be ready to quit, and have a video message-capable phone. Recruitment targeted younger adults predominantly through radio and online advertising. Registration and data collection were completed online, prompted by text messages. The intervention group received an automated package of video and text messages over 6 months that was tailored to self-selected quit date, role model, and timing of messages. Extra messages were available on demand to beat cravings and address lapses. The control group also set a quit date and received a general health video message sent to their phone every 2 weeks. The target sample size was not achieved due to difficulty recruiting young adult quitters. Of the 226 randomized participants, 47% (107/226) were female and 24% (54/226) were Maori (indigenous population of New Zealand). Their mean age was 27 years (SD 8.7), and there was a high level of nicotine addiction. Continuous abstinence at 6 months was 26.4% (29/110) in the intervention group and 27.6% (32/116) in the control group (P = .8). Feedback from participants indicated that the support provided by the video role models was important and appreciated. This study was not able to demonstrate a statistically significant effect of the complex video messaging mobile phone intervention compared with simple general health video messages via mobile phone. However, there was

  16. Effects of active video games on body composition: a randomized controlled trial.

    Science.gov (United States)

    Maddison, Ralph; Foley, Louise; Ni Mhurchu, Cliona; Jiang, Yannan; Jull, Andrew; Prapavessis, Harry; Hohepa, Maea; Rodgers, Anthony

    2011-07-01

    Sedentary activities such as video gaming are independently associated with obesity. Active video games, in which players physically interact with images on screen, may help increase physical activity and improve body composition. The aim of this study was to evaluate the effect of active video games over a 6-mo period on weight, body composition, physical activity, and physical fitness. We conducted a 2-arm, parallel, randomized controlled trial in Auckland, New Zealand. A total of 322 overweight and obese children aged 10-14 y, who were current users of sedentary video games, were randomly assigned at a 1:1 ratio to receive either an active video game upgrade package (intervention, n = 160) or to have no change (control group, n = 162). The primary outcome was the change from baseline in body mass index (BMI; in kg/m(2)). Secondary outcomes were changes in percentage body fat, physical activity, cardiorespiratory fitness, video game play, and food snacking. At 24 wk, the treatment effect on BMI (-0.24; 95% CI: -0.44, -0.05; P = 0.02) favored the intervention group. The change (±SE) in BMI from baseline increased in the control group (0.34 ± 0.08) but remained the same in the intervention group (0.09 ± 0.08). There was also evidence of a reduction in body fat in the intervention group (-0.83%; 95% CI: -1.54%, -0.12%; P = 0.02). At 24 wk, daily time spent playing active video games had increased in the intervention group (10.03 min; 95% CI: 6.26, 13.81 min), whereas daily time spent playing nonactive video games had decreased (-9.39 min; 95% CI: -19.38, 0.59 min; P = 0.06). An active video game intervention has a small but definite effect on BMI and body composition in overweight and obese children. This trial was registered in the Australian New Zealand Clinical Trials Registry at http://www.anzctr.org.au/ as ACTRN12607000632493.

  17. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m, and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.

  18. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees.

    Science.gov (United States)

    Giraldo, Paula Jimena Ramos; Aguirre, Álvaro Guerrero; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio

    2017-04-06

    Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m, and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.
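
    The "sharpness factor which measures blurriness" is not specified in the abstract; a common stand-in is the variance of the Laplacian. The sketch below, assuming OpenCV and an illustrative threshold, shows how the sharpest frames of an acquired video could be selected with such a metric.

    ```python
    import cv2

    def sharpness(frame_bgr):
        """Variance of the Laplacian: higher means sharper, lower means blurrier.
        (A common blur metric; the paper's exact clarity index may differ.)"""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def select_sharp_frames(path, threshold=100.0):
        """Keep only frames whose sharpness exceeds a (tunable) threshold."""
        cap = cv2.VideoCapture(path)
        kept = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if sharpness(frame) >= threshold:
                kept.append(frame)
        cap.release()
        return kept
    ```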

  19. A television/still camera with common optical system for reactor inspection

    International Nuclear Information System (INIS)

    Hughes, G.; McBane, P.

    1976-01-01

    One of the problems of reactor inspection is to obtain permanent high quality records. Video recordings provide a record of poor quality but known content. Still cameras can be used, but the frame content is not predictable. Efforts have been made to combine T.V. viewing with a still camera for alignment, but a simple combination does not provide the same frame size, and the necessity to preset the still camera controls severely restricts the flexibility of operation. A camera has, therefore, been designed which allows a search operation using the T.V. system. When an anomaly is found, the still camera controls can be remotely set, an exact record obtained, and the search operation continued without removing the camera from the reactor. An application of this camera in the environment of the blanket gas region above the sodium in PFR at 150 °C is described

  20. Video Coaching as an Efficient Teaching Method for Surgical Residents-A Randomized Controlled Trial.

    Science.gov (United States)

    Soucisse, Mikael L; Boulva, Kerianne; Sideris, Lucas; Drolet, Pierre; Morin, Michel; Dubé, Pierre

    As surgical training is evolving and operative exposure is decreasing, new, effective, and experiential learning methods are needed to ensure surgical competency and patient safety. Video coaching is an emerging concept in surgery that needs further investigation. In this randomized controlled trial conducted at a single teaching hospital, participating residents were filmed performing a side-to-side intestinal anastomosis on cadaveric dog bowel for baseline assessment. The Surgical Video Coaching (SVC) group then participated in a one-on-one video playback coaching and debriefing session with a surgeon, during which constructive feedback was given. The control group went on with their normal clinical duties without coaching or debriefing. All participants were filmed making a second intestinal anastomosis. This was compared to their first anastomosis using a validated 7-category technical skill global rating scale, the Objective Structured Assessment of Technical Skills. A single independent surgeon, who did not provide the coaching or debriefing to the SVC group, reviewed all videos. A satisfaction survey was then sent to the residents in the coaching group. Department of Surgery, Hôpital Maisonneuve-Rosemont, tertiary teaching hospital affiliated with the University of Montreal, Canada. General surgery residents from the University of Montreal were recruited to take part in this trial. A total of 28 residents were randomized and completed the study. After the intervention, the SVC group (n = 14) significantly increased their Objective Structured Assessment of Technical Skills score (mean of differences 3.36 [1.09-5.63], p = 0.007) when compared to the control group (n = 14) (mean of differences 0.29, p = 0.759). All residents agreed or strongly agreed that video coaching was a time-efficient teaching method. Video coaching is an effective and efficient teaching intervention to improve surgical residents' technical skills.

  1. The 3D Human Motion Control Through Refined Video Gesture Annotation

    Science.gov (United States)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joysticks and buttons with novel interfaces such as the motion-sensing remote controllers of the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from cumbersome game controllers. Moreover, video-based HCI is crucial for communication between humans and computers, since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video of human motion is still a major challenge: the level of accuracy depends strongly on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motion in 3D space (e.g., 'Tiger Woods' in EA Sports titles, 'Angelina Jolie' in the movie Beowulf) and for analyzing motion in specific tasks (e.g., the golf swing and walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a sub-body part and a row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the 3D motion-capture data matrix does not contain pixel values but is closer to a human level of semantics.
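
    The point that a sub-body part's motion can be extracted simply by selecting columns of the capture matrix can be illustrated with a toy NumPy example; the channel-to-joint mapping below is invented for illustration and does not reflect the actual VICON channel layout.

    ```python
    import numpy as np

    # toy motion-capture matrix: rows are time frames, columns are joint channels
    # (the real VICON channel layout differs; this mapping is purely illustrative)
    channels = {"right_wrist": [0, 1, 2], "right_elbow": [3, 4, 5],
                "left_knee": [6, 7, 8]}
    motion = np.random.rand(1200, 9)          # 1200 captured frames, 9 channels

    # extracting the right arm's motion is just a column selection
    right_arm = motion[:, channels["right_wrist"] + channels["right_elbow"]]
    print(right_arm.shape)                    # (1200, 6)
    ```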

  2. 76 FR 75911 - Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings

    Science.gov (United States)

    2011-12-05

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-743] Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. Section 337 of the Tariff Act of 1930 provides that if the Commission finds a violation it shall exclude the articles...

  3. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed...

  4. Video Demo: Deep Reinforcement Learning for Coordination in Traffic Light Control

    NARCIS (Netherlands)

    van der Pol, E.; Oliehoek, F.A.; Bosse, T.; Bredeweg, B.

    2016-01-01

    This video demonstration contrasts two approaches to coordination in traffic light control using reinforcement learning: earlier work, based on a deconstruction of the state space into a linear combination of vehicle states, and our own approach based on the Deep Q-learning algorithm.

  5. Moral disengagement moderates the effect of violent video games on self-control, cheating and aggression

    NARCIS (Netherlands)

    Gabbiadini, A.; Riva, P.; Andrighetto, L.; Volpato, C.; Bushman, B.J.

    2014-01-01

    Violent video games glorify and reward immoral behaviors (e.g., murder, assault, rape, robbery, arson, motor vehicle theft). Based on the moral disengagement theory, we predicted that violent games would increase multiple immoral behaviors (i.e., lack of self-control, cheating, aggression),

  6. Cross-Layer QoS Control for Video Communications over Wireless Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Pei Yong

    2005-01-01

    Full Text Available Assuming a wireless ad hoc network consisting of homogeneous video users with each of them also serving as a possible relay node for other users, we propose a cross-layer rate-control scheme based on an analytical study of how the effective video transmission rate is affected by the prevailing operating parameters, such as the interference environment, the number of transmission hops to a destination, and the packet loss rate. Furthermore, in order to provide error-resilient video delivery over such wireless ad hoc networks, a cross-layer joint source-channel coding (JSCC approach, to be used in conjunction with rate-control, is proposed and investigated. This approach attempts to optimally apply the appropriate channel coding rate given the constraints imposed by the effective transmission rate obtained from the proposed rate-control scheme, the allowable real-time video play-out delay, and the prevailing channel conditions. Simulation results are provided which demonstrate the effectiveness of the proposed cross-layer combined rate-control and JSCC approach.
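
    The paper's analytical rate model and JSCC optimization are not given in the abstract; the toy sketch below only illustrates the kind of cross-layer decision involved, with an assumed effective-rate estimate over a multi-hop relay path and an arbitrary mapping from packet loss to channel code rate.

    ```python
    def effective_rate(link_rate_kbps, hops, packet_loss):
        """Crude effective end-to-end rate estimate for a multi-hop relay path:
        the shared medium divides capacity across hops and lost packets waste
        airtime. (Illustrative only; the paper derives its own model.)"""
        return link_rate_kbps / max(hops, 1) * (1.0 - packet_loss)

    def split_source_channel(rate_kbps, packet_loss,
                             code_rates=(8 / 9, 4 / 5, 2 / 3, 1 / 2)):
        """Pick a channel code whose redundancy fits the rate budget, giving the
        rest to the video source coder (a toy stand-in for the JSCC step)."""
        # heavier loss -> lower code rate (more protection), fewer source bits
        code_rate = code_rates[min(int(packet_loss * 10), len(code_rates) - 1)]
        return rate_kbps * code_rate, code_rate   # (source bitrate, code rate)

    budget = effective_rate(link_rate_kbps=1000, hops=3, packet_loss=0.1)
    source_kbps, r = split_source_channel(budget, packet_loss=0.1)
    print(round(source_kbps), r)
    ```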

  7. The Effects of Variations in Lesson Control and Practice on Learning from Interactive Video.

    Science.gov (United States)

    Hannafin, Michael J.; Colamaio, MaryAnne E.

    1987-01-01

    Discussion of the effects of variations in lesson control and practice on the learning of facts, procedures, and problem-solving skills during interactive video instruction focuses on a study of graduates and advanced level undergraduates learning cardiopulmonary resuscitation (CPR). Embedded questioning methods and posttests used are described.…

  8. Attention deficit/hyperactivity disorder and video games: a comparative study of hyperactive and control children.

    Science.gov (United States)

    Bioulac, Stéphanie; Arfi, Lisa; Bouvard, Manuel P

    2008-03-01

    This study describes and compares the behavior of hyperactive and control children playing video games. The sample consisted of 29 ADHD children and 21 controls aged between 6 and 16 years playing video games. We used the Child Behavior Checklist and the Problem Videogame Playing scale (PVP scale). This instrument gives objective measures of problem use, which can be considered as an indication of addictive videogame playing. We designed a questionnaire for the parents, eliciting qualitative information about their child's videogame playing. There were no significant differences concerning frequency or duration of play between ADHD children and controls but differences were observed on the PVP scale. None of the controls scored above four whereas 10 hyperactive children answered affirmatively to five or more questions. These children presented a greater intensity of the disorder than the other ADHD children. While no differences concerning video game use were found, ADHD children exhibited more problems associated with videogame playing. It seems that a subgroup of ADHD children could be vulnerable to developing dependence upon video games.

  9. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-Of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, taken also with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between

  10. Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect

    Science.gov (United States)

    Razavi, Rouzbeh; Fleury, Martin; Ghanbari, Mohammed

    2008-12-01

    Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC) of automatic repeat request (ARQ) as a way of reconciling these factors, with a 40% saving in power in the worst channel conditions from economizing on transmissions when channel errors occur. Whatever the channel conditions are, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
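
    The fuzzy rule base and membership functions tuned in the paper are not reproduced in the abstract; the sketch below is a generic Sugeno-style fuzzy controller that maps channel loss rate and play-out deadline slack to an ARQ retransmission limit, with illustrative membership functions and rules.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b over the interval [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_arq_limit(loss_rate, slack_ms):
        """Choose an ARQ retransmission limit from channel loss rate and
        display-deadline slack (illustrative values, not the paper's tuning)."""
        loss = {"low": tri(loss_rate, -0.1, 0.0, 0.2),
                "med": tri(loss_rate, 0.1, 0.3, 0.5),
                "high": tri(loss_rate, 0.4, 0.7, 1.1)}
        slack = {"tight": tri(slack_ms, -10, 0, 40),
                 "loose": tri(slack_ms, 20, 100, 1000)}
        # rule base: (firing strength, crisp retransmission limit)
        rules = [
            (min(loss["low"], slack["tight"]), 1),
            (min(loss["low"], slack["loose"]), 2),
            (min(loss["med"], slack["tight"]), 2),
            (min(loss["med"], slack["loose"]), 4),
            (min(loss["high"], slack["tight"]), 3),
            (min(loss["high"], slack["loose"]), 6),  # poor channel, time to spare
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules)
        return round(num / den) if den > 0 else 1   # weighted-average defuzzification

    print(fuzzy_arq_limit(loss_rate=0.35, slack_ms=60))
    ```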

  11. A content analysis of smoking fetish videos on YouTube: regulatory implications for tobacco control.

    Science.gov (United States)

    Kim, Kyongseok; Paek, Hye-Jin; Lynn, Jordan

    2010-03-01

    This study examined the prevalence, accessibility, and characteristics of eroticized smoking portrayal, also referred to as smoking fetish, on YouTube. The analysis of 200 smoking fetish videos revealed that the smoking fetish videos are prevalent and accessible to adolescents on the website. They featured explicit smoking behavior by sexy, young, and healthy females, with the content corresponding to PG-13 and R movie ratings. We discuss a potential impact of the prosmoking image on youth according to social cognitive theory, and implications for tobacco control.

  12. Design, Implementation and Evaluation of Congestion Control Mechanism for Video Streaming

    OpenAIRE

    Hiroshi Noborio; Hiroyuki Hisamatsu; Hiroki Oda

    2011-01-01

    In recent years, video streaming services over TCP, such as YouTube, have become more and more popular. TCP NewReno, the current TCP standard, performs greedy congestion control, which increases the congestion window size until packet loss occurs. Therefore, because TCP transmits data at a much higher rate than the video playback rate, the probability of packet loss in the network increases, which in turn takes bandwidth from other network traffic. In this paper, we propose a new transport-la...

  13. Enhancing the control of force in putting by video game training.

    Science.gov (United States)

    Fery, Y A; Ponserre, S

    2001-10-10

    Even if golf video games provide no proprioceptive afferences on actual putting movement, they may give sufficient substitutive visual cues to enhance force control in this skill. It was hypothesized that this usefulness requires, however, two conditions: the video game must provide reliable demonstrations of actual putts, and the user must want to use the game to make progress in actual putting. Accordingly, a video game was selected on the basis of its fidelity to the real-world game. It allowed two different methods of adjusting the virtual player's putting force in order to hole a putt: an analogue method that consisted of focusing on the virtual player's movement and a symbolic method that consisted of focusing on the movement of a gauge on a scale representing the virtual player's putting force. The participants had to use one of these methods with either the intention of making progress in actual putting or in a second condition to simply enjoy the game. Results showed a positive transfer of video playing to actual putting skill for the learning group and also, to a lesser degree, for the enjoyment group; but only when they used the symbolic method. Results are discussed in the context of how vision may convey force cues in sports video games.

  14. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    International Nuclear Information System (INIS)

    Wright, R.; Zander, M.; Brown, S.; Sandoval, D.; Gilpatrick, D.; Gibson, H.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) is discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. (Author) (3 figs., 4 refs.)
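
    As a simplified illustration of the profile computation performed by such a system (not the actual imagetool algorithms), a beam-gas luminescence image can be collapsed into a one-dimensional profile and summarized by its centroid and RMS width:

    ```python
    import numpy as np

    def beam_profile(image, axis=0, background=None):
        """Collapse a luminescence image into a 1-D beam profile by summing
        pixel intensities along one axis, with optional background subtraction,
        and return the profile plus its centroid and RMS width."""
        img = image.astype(float)
        if background is not None:
            img = np.clip(img - background.astype(float), 0.0, None)
        profile = img.sum(axis=axis)
        coords = np.arange(profile.size)
        centroid = (coords * profile).sum() / profile.sum()
        rms_width = np.sqrt((((coords - centroid) ** 2) * profile).sum()
                            / profile.sum())
        return profile, centroid, rms_width

    # toy usage with synthetic data
    frame = np.random.poisson(5.0, size=(480, 640))
    profile, centroid, width = beam_profile(frame, axis=0)
    ```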

  15. Exploring the use of entertainment-education YouTube videos focused on infection prevention and control.

    Science.gov (United States)

    Lim, Kathryn; Kilpatrick, Claire; Storr, Julie; Seale, Holly

    2018-06-05

    As a communications strategy, education entertainment has been used to inform, influence, and shift societal and individual behaviors. Recently, there has been an increasing number of entertainment-education YouTube videos focused on hand hygiene. However, there is currently no understanding about the quality of these videos; therefore, this study aimed to explore the social media content and user engagement with these videos. The search terms "hand hygiene" and "hand hygiene education" were used to query YouTube. Video content had to be directed at a health care professional audience. Using author designed checklists, each video was systematically evaluated and grouped according to educational usefulness and was subsequently evaluated against the categories of attractiveness, comprehension, and persuasiveness. A total of 400 videos were screened, with 70 videos retained for analysis. Of these, 55.7% (n = 39) were categorized as educationally useful. Overall, educationally useful videos scored higher than noneducationally useful videos across the categories of attractiveness, comprehension, and persuasiveness. Miscommunication of the concept of My 5 Moments for Hand Hygiene was observed in several of the YouTube videos. The availability of educationally useful videos in relation to hand hygiene is evident; however, it is clear that there are opportunities for contributors using this medium to strengthen their alignment with social media best practice principles to maximize the effectiveness, reach, and sustainability of their content. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  16. Technology consumption and cognitive control: Contrasting action video game experience with media multitasking.

    Science.gov (United States)

    Cardoso-Leite, Pedro; Kludt, Rachel; Vignola, Gianluca; Ma, Wei Ji; Green, C Shawn; Bavelier, Daphne

    2016-01-01

    Technology has the potential to impact cognition in many ways. Here we contrast two forms of technology usage: (1) media multitasking (i.e., the simultaneous consumption of multiple streams of media, such as texting while watching TV) and (2) playing action video games (a particular subtype of video games). Previous work has outlined an association between high levels of media multitasking and specific deficits in handling distracting information, whereas playing action video games has been associated with enhanced attentional control. Because these two factors are linked with reasonably opposing effects, failing to take them jointly into account may result in inappropriate conclusions as to the impacts of technology use on attention. Across four tasks (AX-continuous performance, N-back, task-switching, and filter tasks), testing different aspects of attention and cognition, we showed that heavy media multitaskers perform worse than light media multitaskers. Contrary to previous reports, though, the performance deficit was not specifically tied to distractors, but was instead more global in nature. Interestingly, participants with intermediate levels of media multitasking sometimes performed better than both light and heavy media multitaskers, suggesting that the effects of increasing media multitasking are not monotonic. Action video game players, as expected, outperformed non-video-game players on all tasks. However, surprisingly, this was true only for participants with intermediate levels of media multitasking, suggesting that playing action video games does not protect against the deleterious effect of heavy media multitasking. Taken together, these findings show that media consumption can have complex and counterintuitive effects on attentional control.

  17. 360° virtual reality video for the acquisition of knot tying skills: A randomised controlled trial.

    Science.gov (United States)

    Yoganathan, S; Finch, D A; Parkin, E; Pollard, J

    2018-04-10

    360° virtual reality (VR) video is an exciting and evolving field. Current technology promotes a totally immersive, 3-dimensional (3D), 360° experience anywhere in the world using simply a smart phone and virtual reality headset. The potential for its application in the field of surgical education is enormous. The aim of this study was to determine knot tying skills taught with a 360-degree VR video compared to conventional 2D video teaching. This trial was a prospective, randomised controlled study. 40 foundation year doctors (first year postgraduate) were randomised to either the 360-degree VR video (n = 20) or 2D video teaching (n = 20). Participants were given 15 min to watch their allocated video. Ability to tie a single handed reef knot was then assessed against a marking criteria developed for the Royal College of Surgeons, England, (RCSeng) Basic Surgical Skills (BSS) course, by a blinded assessor competent in knot tying. Each candidate then underwent further teaching using Peyton's four step model. Knot tying technique was then re-assessed. Knot tying scores were significantly better in the VR video teaching arm when compared with conventional (median knot score 5.0 vs 4.0 p = 0.04). When used in combination with face to face skills teaching this difference persisted (median knot score 9.5 vs 9.0 p = 0.01). More people in the VR arm constructed a complete reef knot than in the 2D arm following face to face teaching (17/20 vs 12/20). No difference between the groups existed in the time taken to construct a reef knot following video and teaching (median time 31.0s vs 30.5s p = 0.89). This study shows there is significant merit in the application of 360-degree VR video technology in surgical training, both as an independent teaching aid and when used as an adjunct to traditional face to face teaching. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  18. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  19. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, and only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
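
    A minimal sketch of the registration-and-warping step described above follows. The paper uses SURF key points; because SURF is not included in default OpenCV builds, the sketch substitutes ORB, and it performs only a naive overlay rather than the paper's boundary resampling and blending.

    ```python
    import cv2
    import numpy as np

    def stitch_pair(img_a, img_b, min_matches=10):
        """Estimate a homography from img_b to img_a and warp img_b onto img_a's
        image plane (ORB features, RANSAC homography, naive overlay)."""
        orb = cv2.ORB_create(4000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
        if len(matches) < min_matches:
            raise RuntimeError("not enough matches to estimate a homography")
        src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img_a.shape[:2]
        canvas = cv2.warpPerspective(img_b, H, (w * 2, h))   # crude canvas size
        canvas[0:h, 0:w] = img_a                             # overlay, no blending
        return canvas
    ```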

  20. A Fuzzy Control System for Inductive Video Games

    OpenAIRE

    Lara-Alvarez, Carlos; Mitre-Hernandez, Hugo; Flores, Juan; Fuentes, Maria

    2017-01-01

    It has been shown that the emotional state of students has an important relationship with learning; for instance, engaged concentration is positively correlated with learning. This paper proposes the Inductive Control (IC) for educational games. Unlike conventional approaches that only modify the game level, the proposed technique also induces emotions in the player for supporting the learning process. This paper explores a fuzzy system that analyzes the players' performance and their emotion...

  1. Video Liveness for Citizen Journalism: Attacks and Defenses

    OpenAIRE

    Rahman, Mahmudur; Azimpourkivi, Mozhgan; Topkara, Umut; Carbunar, Bogdan

    2017-01-01

    The impact of citizen journalism raises important video integrity and credibility issues. In this article, we introduce Vamos, the first user transparent video "liveness" verification solution based on video motion, that accommodates the full range of camera movements, and supports videos of arbitrary length. Vamos uses the agreement between video motion and camera movement to corroborate the video authenticity. Vamos can be integrated into any mobile video capture application without requiri...

  2. Action video games and improved attentional control: Disentangling selection- and response-based processes.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-10-01

    Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus-response processes that impact human performance.

  3. The effects of self-controlled video feedback on the learning of the basketball set shot

    Directory of Open Access Journals (Sweden)

    Christopher Adam Aiken

    2012-09-01

    Full Text Available Allowing learners to control some aspect of instructional support (e.g., augmented feedback) appears to facilitate motor skill acquisition. No studies, however, have examined self-controlled (SC) video feedback without the provision of additional attentional cueing. The purpose of this study was to extend previous SC research using video feedback about movement form for the basketball set shot without explicitly directing attention to specific aspects of the movement. The SC group requested video feedback of their performance following any trial during the acquisition phase. The yoked (YK) group received feedback according to a schedule created by a SC counterpart. During acquisition participants were also allowed to view written instructional cues at any time. Results revealed that the SC group had significantly higher form scores during the transfer phase and utilized the instructional cues more frequently during acquisition. Post-training questionnaire responses indicated no preference for requesting or receiving feedback following good trials as reported by Chiviacowsky and Wulf (2002, 2005). The nature of the task was such that participants could have assigned both positive and negative evaluations to different aspects of the movement during the same trial. Thus, the lack of preferences along with the similarity in scores for feedback and no-feedback trials may simply have reflected this complexity. Importantly, however, the results indicated that SC video feedback conferred a learning benefit without the provision of explicit additional attentional cueing.

  4. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  5. Quality control in radiosurgery: dosimetry with a micro chamber in a spherical phantom; Control de calidad en radiocirugia: dosimetria con microcamara en maniqui esferico

    Energy Technology Data Exchange (ETDEWEB)

    Casado Villalon, F. J.; Navarro Guirado, F.; Garci Pareja, S.; Benitez Villegas, E. M.; Galan Montenegro, P.; Moreno Saiz, C.

    2013-07-01

    Small-field dosimetry is part of the quality control for cranial radiosurgery treatments. In this work, the absorbed dose at the isocenter calculated by the treatment planning system is compared with that measured experimentally with a micro ionization chamber inserted in a spherical phantom. (Author)

  6. Linear array of photodiodes to track a human speaker for video recording

    International Nuclear Information System (INIS)

    DeTone, D; Neal, H; Lougheed, R

    2012-01-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant– the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4kHz vs. 60Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting–a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.

  7. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant- the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4kHz vs. 60Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting-a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
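
    A sketch of the panning logic such a tracker could use is shown below: the LED position reported by the photodiode array is mapped to a pan-speed command with a deadband and clipping. The array length, gain, and deadband values are illustrative, not those of the installed system.

    ```python
    def pan_command(led_position, array_length=512, deadband=20, gain=0.05,
                    max_speed=1.0):
        """Map the detected LED position on the photodiode array to a camera pan
        speed: zero inside a deadband around centre, proportional (and clipped)
        outside it."""
        error = led_position - array_length / 2        # pixels off-centre
        if abs(error) <= deadband:
            return 0.0                                  # close enough: hold position
        speed = gain * error
        return max(-max_speed, min(max_speed, speed))   # clip to the PTZ speed range

    # e.g. the speaker drifts toward the right edge of the array
    print(pan_command(led_position=400))
    ```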

  8. Development of gamma camera display phantom for quality control in developing countries

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1981-08-01

    A special phantom suitable for the routine evaluation of ''end-to-end'' gamma camera system performance, that is, system performance from input to output, is described. The design finally adopted, called the ''strip-wedge phantom'' and consisting of an array of copper or aluminium wedges of various thicknesses, permits the evaluation of contrast along one axis and resolution along the other. It is proposed that on acceptance testing of a gamma camera system a series of progressively degraded images should be obtained from the best possible with the system to very poor. An ''action threshold'' should then be defined such that image quality below this threshold would warrant such action as calling in the service engineer. Daily routine images should then be examined with reference to this threshold. Experience with the phantom is summarized

  9. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  10. Guerrilla Video: A New Protocol for Producing Classroom Video

    Science.gov (United States)

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  11. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    Science.gov (United States)

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    In order to engage medical students and residents from public health centers to utilize the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed to stream live footage from open surgeries to smartphones and tablets, allowing visualization of the surgical field from the surgeon's perspective. The current study aims to describe the results of an evaluation, at level 1 of Kirkpatrick's Model for Evaluation, of the streaming system's usage during gynecological surgeries, based on the perception of medical students and gynecology residents. The intervention consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers were able to connect to the local wireless network created by the streaming system through an access password and watch the video transmission in a web browser on their smartphones. They then answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, as well as comparing it to watching a procedure in loco. This study was formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, totalling 294 answered items, of which 94.2% indicated agreement with the item statements, 4.1% were neutral, and only 1.7% corresponded to negative impressions. Cronbach's α was .82, which represents a good level of reliability. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20. This study presents a local system for streaming video of live surgeries to smartphones and tablets and shows its educational utility, low cost, and simple usage; it offers convenience and satisfactory image resolution, and is thus potentially applicable in surgical teaching.

  12. Cognitive rehabilitation of attention deficits in traumatic brain injury using action video games: A controlled trial

    Directory of Open Access Journals (Sweden)

    Alexandra Vakili

    2016-12-01

    Full Text Available This paper investigates the utility and efficacy of a novel eight-week cognitive rehabilitation programme developed to remediate attention deficits in adults who have sustained a traumatic brain injury (TBI), incorporating the use of both action video game playing and a compensatory skills programme. Thirty-one male TBI patients, aged 18–65 years, were recruited from 2 Australian brain injury units and allocated to either a treatment or waitlist (treatment-as-usual) control group. Results showed improvements in the treatment group, but not the waitlist control group, for performance on the immediate trained task (i.e. the video game) and in non-trained measures of attention and quality of life. Neither group showed changes to executive behaviours or self-efficacy. The strengths and limitations of the study are discussed, as are the potential applications and future implications of the research.

  13. Authentication Approaches for Standoff Video Surveillance

    International Nuclear Information System (INIS)

    Baldwin, G.; Sweatt, W.; Thomas, M.

    2015-01-01

    Video surveillance for international nuclear safeguards applications requires authentication, which confirms to an inspector reviewing the surveillance images that both the source and the integrity of those images can be trusted. To date, all such authentication approaches originate at the camera. Camera authentication would not suffice for a ''standoff video'' application, where the surveillance camera views an image piped to it from a distant objective lens. Standoff video might be desired in situations where it does not make sense to expose sensitive and costly camera electronics to contamination, radiation, water immersion, or other adverse environments typical of hot cells, reprocessing facilities, and within spent fuel pools, for example. In this paper, we offer optical architectures that introduce a standoff distance of several metres between the scene and camera. Several schemes enable one to authenticate not only that the extended optical path is secure, but also that the scene is being viewed live. They employ optical components with remotely-operated spectral, temporal, directional, and intensity properties that are under the control of the inspector. If permitted by the facility operator, illuminators, reflectors and polarizers placed in the scene offer further possibilities. Any tampering that would insert an alternative image source for the camera, although undetectable with conventional cryptographic authentication of digital camera data, is easily exposed using the approaches we describe. Sandia National Laboratories is a multi-programme laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Support to Sandia National Laboratories provided by the NNSA Next Generation Safeguards Initiative is gratefully acknowledged. SAND2014-3196 A. (author)

  14. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  15. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  16. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd:glass laser acts as an ultra-fast and periodic shutter, with an opening time of a few picoseconds. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing us to study very fast effects [fr]

  17. Mobile-Based Video Learning Outcomes in Clinical Nursing Skill Education: A Randomized Controlled Trial.

    Science.gov (United States)

    Lee, Nam-Ju; Chae, Sun-Mi; Kim, Haejin; Lee, Ji-Hye; Min, Hyojin Jennifer; Park, Da-Eun

    2016-01-01

    Mobile devices are a regular part of daily life among the younger generations. Thus, now is the time to apply mobile device use to nursing education. The purpose of this study was to identify the effects of a mobile-based video clip on learning motivation, competence, and class satisfaction in nursing students using a randomized controlled trial with a pretest and posttest design. A total of 71 nursing students participated in this study: 36 in the intervention group and 35 in the control group. A video clip of how to perform a urinary catheterization was developed, and the intervention group was able to download it to their own mobile devices for unlimited viewing throughout 1 week. All of the students participated in a practice laboratory to learn urinary catheterization and were blindly tested for their performance skills after participation in the laboratory. The intervention group showed significantly higher levels of learning motivation and class satisfaction than did the control. Of the fundamental nursing competencies, the intervention group was more confident in practicing catheterization than their counterparts. Our findings suggest that video clips using mobile devices are useful tools that educate student nurses on relevant clinical skills and improve learning outcomes.

  18. Influence of control and physical effort on cardiovascular reactivity to a video game task.

    Science.gov (United States)

    Weinstein, Suzanne E; Quigley, Karen S; Mordkoff, J Toby

    2002-09-01

    This study investigated the influences of both perceived control and physical effort on cardiovascular reactivity. Undergraduates (N = 32) played a video game task interrupted by aversive noise. Perceived control of the noise was manipulated by instructions indicating the presence or absence of a contingency between performance and noise presentations. Physical effort was manipulated by controlling the physical force required to perform the task. There was a significant main effect of control on systolic blood pressure (SBP) and total peripheral resistance (TPR), with both increasing more during low than high control conditions. The results suggest that high perceived control over aversive noise in an effortful task reduces SBP and TPR reactivity relative to low perceived control. The results are consistent with the idea that control buffers the reactivity associated with task performance under aversive conditions.

  19. Crowdsourcing HIV Test Promotion Videos: A Noninferiority Randomized Controlled Trial in China.

    Science.gov (United States)

    Tang, Weiming; Han, Larry; Best, John; Zhang, Ye; Mollan, Katie; Kim, Julie; Liu, Fengying; Hudgens, Michael; Bayus, Barry; Terris-Prestholt, Fern; Galler, Sam; Yang, Ligang; Peeling, Rosanna; Volberding, Paul; Ma, Baoli; Xu, Huifang; Yang, Bin; Huang, Shujie; Fenton, Kevin; Wei, Chongyi; Tucker, Joseph D

    2016-06-01

    Crowdsourcing, the process of shifting individual tasks to a large group, may enhance human immunodeficiency virus (HIV) testing interventions. We conducted a noninferiority, randomized controlled trial to compare first-time HIV testing rates among men who have sex with men (MSM) and transgender individuals who received a crowdsourced or a health marketing HIV test promotion video. Seven hundred twenty-one MSM and transgender participants (≥16 years old, never before tested for HIV) were recruited through 3 Chinese MSM Web portals and randomly assigned to 1 of 2 videos. The crowdsourced video was developed using an open contest and formal transparent judging while the evidence-based health marketing video was designed by experts. Study objectives were to measure HIV test uptake within 3 weeks of watching either HIV test promotion video and cost per new HIV test and diagnosis. Overall, 624 of 721 (87%) participants from 31 provinces in 217 Chinese cities completed the study. HIV test uptake was similar between the crowdsourced arm (37% [114/307]) and the health marketing arm (35% [111/317]). The estimated difference between the interventions was 2.1% (95% confidence interval, -5.4% to 9.7%). Among those tested, 31% (69/225) reported a new HIV diagnosis. The crowdsourced intervention cost substantially less than the health marketing intervention per first-time HIV test (US$131 vs US$238 per person) and per new HIV diagnosis (US$415 vs US$799 per person). Our nationwide study demonstrates that crowdsourcing may be an effective tool for improving HIV testing messaging campaigns and could increase community engagement in health campaigns. NCT02248558. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  20. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors

    Directory of Open Access Journals (Sweden)

    Abdelkader Nasreddine Belkacem

    2015-01-01

    Full Text Available EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements for an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.

  1. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors.

    Science.gov (United States)

    Belkacem, Abdelkader Nasreddine; Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

    EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practicing, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements for an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.
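
    The following sketch (Python with NumPy) is a loose illustration of the general shape of such a pipeline, not the authors' algorithm: a one-level Haar wavelet detail filter flags candidate eye-movement instants on the two temporal channels, and a crude signed-deflection rule assigns a label. The sampling rate, threshold and the reduced three-way labelling are assumptions; the paper distinguishes six classes.

      import numpy as np

      def haar_detail(x):
          """One-level Haar wavelet detail coefficients (a simple difference filter)."""
          n = len(x) - len(x) % 2
          pairs = np.asarray(x[:n], dtype=float).reshape(-1, 2)
          return (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)

      def candidate_instants(left_ch, right_ch, threshold_uv=40.0):
          """Sample indices where either temporal channel shows a sharp transient."""
          d = np.maximum(np.abs(haar_detail(left_ch)), np.abs(haar_detail(right_ch)))
          return np.where(d > threshold_uv)[0] * 2      # map back to the original rate

      def label_event(left_ch, right_ch, idx, win=32):
          """Coarse left/right/other label from the signed deflection around idx."""
          idx = max(idx, win)                           # keep the 'before' window non-empty
          def step(ch):
              ch = np.asarray(ch, dtype=float)
              return ch[idx:idx + win].mean() - ch[idx - win:idx].mean()
          l, r = step(left_ch), step(right_ch)
          if l > 0 > r:
              return "left"
          if r > 0 > l:
              return "right"
          return "blink_or_other"

      # Example: a synthetic 1 s recording at 256 Hz containing one opposite deflection.
      fs = 256
      left = np.zeros(fs)
      right = np.zeros(fs)
      left[129:] += 80.0
      right[129:] -= 80.0
      for k in candidate_instants(left, right):
          print(int(k), label_event(left, right, int(k)))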

  2. Distributed Real-Time Embedded Video Processing

    National Research Council Canada - National Science Library

    Lv, Tiehan

    2004-01-01

    .... A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computations, in order to meet performance and power requirements...

  3. Quality control in dual head γ-cameras: comparison between methods and software used for image analysis

    International Nuclear Information System (INIS)

    Nayl E, A.; Fornasier, M. R.; De Denaro, M.; Sulieman, A.; Alkhorayef, M.; Bradley, D.

    2017-10-01

    Patient radiation dose and image quality are the main issues in nuclear medicine (Nm) procedures. Currently, many protocols are used for image acquisition and analysis of quality control (Qc) tests. National Electrical Manufacturers Association (Nema) methods and protocols are widely accepted methods for providing accurate description, measurement and reporting of γ-camera performance parameters. However, no standard software is available for image analysis. The aim of this study was to compare the vendor Qc software analysis with three software packages from different developers, downloaded free from the internet: NMQC, Nm Tool kit and ImageJ-Nm Tool kit. The three packages are used for image analysis of some Qc tests for γ-cameras based on Nema protocols, including non-uniformity evaluation. Ten non-uniformity Qc images were taken from a dual head γ-camera (Siemens Symbia) installed in Trieste general hospital (Italy) and analyzed. Excel analysis was used as the baseline calculation of the non-uniformity test according to Nema procedures. The results of the non-uniformity analysis showed good agreement between the three independent software packages and the Excel calculation (the average differences were 0.3%, 2.9%, 1.3% and 1.6% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively), while a significant difference was detected in the analysis of the company Qc software compared to the Excel analysis (the average differences were 14.6%, 20.7%, 25.7% and 31.9% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively). NMQC software was the best in comparison with the Excel calculations. The variation in the results is due to the different pixel sizes used for analysis in the three packages and the γ-camera Qc software. Therefore, it is important to perform the tests with the vendor Qc software as well as with independent analysis to understand the differences between the values. Moreover, the medical physicist should know

  4. Quality control in dual head γ-cameras: comparison between methods and software used for image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nayl E, A. [Sudan Atomic Energy Commission, Radiation Safety Institute, Khartoum (Sudan); Fornasier, M. R.; De Denaro, M. [Azienda Sanitaria Universitaria Integrata di Trieste, Medical Physics Department, Via Giovanni Sai 7, 34128 Trieste (Italy); Sulieman, A. [Prince Sattam bin Abdulaziz University, College of Applied Medical Sciences, Radiology and Medical Imaging Department, P. O. Box 422, 11942 Al-Kharj (Saudi Arabia); Alkhorayef, M.; Bradley, D., E-mail: abdwsh10@hotmail.com [University of Surrey, Department of Physics, GU2-7XH Guildford, Surrey (United Kingdom)

    2017-10-15

    Patient radiation dose and image quality are the main issues in nuclear medicine (Nm) procedures. Currently, many protocols are used for image acquisition and analysis of quality control (Qc) tests. National Electrical Manufacturers Association (Nema) methods and protocols are widely accepted methods for providing accurate description, measurement and reporting of γ-camera performance parameters. However, no standard software is available for image analysis. The aim of this study was to compare the vendor Qc software analysis with three software packages from different developers, downloaded free from the internet: NMQC, Nm Tool kit and ImageJ-Nm Tool kit. The three packages are used for image analysis of some Qc tests for γ-cameras based on Nema protocols, including non-uniformity evaluation. Ten non-uniformity Qc images were taken from a dual head γ-camera (Siemens Symbia) installed in Trieste general hospital (Italy) and analyzed. Excel analysis was used as the baseline calculation of the non-uniformity test according to Nema procedures. The results of the non-uniformity analysis showed good agreement between the three independent software packages and the Excel calculation (the average differences were 0.3%, 2.9%, 1.3% and 1.6% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively), while a significant difference was detected in the analysis of the company Qc software compared to the Excel analysis (the average differences were 14.6%, 20.7%, 25.7% and 31.9% for UFOV integral, UFOV differential, CFOV integral and CFOV differential, respectively). NMQC software was the best in comparison with the Excel calculations. The variation in the results is due to the different pixel sizes used for analysis in the three packages and the γ-camera Qc software. Therefore, it is important to perform the tests with the vendor Qc software as well as with independent analysis to understand the differences between the values. Moreover, the medical physicist should know
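
    For reference, the core Nema figures of merit mentioned above (integral and differential uniformity over the UFOV or CFOV) reduce to simple max/min contrast calculations once the flood image has been binned and filtered. The sketch below (Python with NumPy) illustrates only those two formulas; it omits the Nema pre-processing steps (pixel-size normalisation, 9-point smoothing, restriction to the central 75% field for the CFOV), so the function and parameter names are illustrative assumptions rather than a validated Qc tool.

      import numpy as np

      def integral_uniformity(fov):
          """Integral uniformity (%) = 100 * (max - min) / (max + min) over the field.

          fov: 2-D array of counts with pixels outside the field set to NaN."""
          v = fov[np.isfinite(fov)]
          return 100.0 * (v.max() - v.min()) / (v.max() + v.min())

      def differential_uniformity(fov, window=5):
          """Differential uniformity (%): worst contrast over every run of `window`
          consecutive in-field pixels along rows and then columns."""
          worst = 0.0
          for view in (fov, fov.T):                 # rows first, then columns
              for line in view:
                  line = line[np.isfinite(line)]
                  for i in range(len(line) - window + 1):
                      seg = line[i:i + window]
                      contrast = 100.0 * (seg.max() - seg.min()) / (seg.max() + seg.min())
                      worst = max(worst, contrast)
          return worst

      # Example on a synthetic 32x32 flood image with 2% count noise.
      rng = np.random.default_rng(1)
      flood = rng.normal(1000.0, 20.0, size=(32, 32))
      print(round(integral_uniformity(flood), 2), round(differential_uniformity(flood), 2))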

  5. A brief report on the relationship between self-control, video game addiction and academic achievement in normal and ADHD students

    OpenAIRE

    Haghbin, Maryam; Shaterian, Fatemeh; Hosseinzadeh, Davood; Griffiths, Mark D.

    2013-01-01

    Background and aims: Over the last two decades, research into video game addiction has grown considerably. The present research aimed to examine the relationship between video game addiction, self-control, and academic achievement of normal and ADHD high school students. Based on previous research it was hypothesized that (i) there would be a relationship between video game addiction, self-control and academic achievement, and (ii) video game addiction, self-control and academic achievement would ...

  6. Effects of Peer-Facilitated, Video-Based and Combined Peer-and-Video Education on Anxiety Among Patients Undergoing Coronary Angiography: Randomised controlled trial.

    Science.gov (United States)

    Habibzadeh, Hosein; Milan, Zahra D; Radfar, Moloud; Alilu, Leyla; Cund, Audrey

    2018-02-01

    Coronary angiography can be stressful for patients and anxiety-caused physiological responses during the procedure increase the risk of dysrhythmia, coronary artery spasms and rupture. This study therefore aimed to investigate the effects of peer, video and combined peer-and-video training on anxiety among patients undergoing coronary angiography. This single-blinded randomised controlled clinical trial was conducted at two large educational hospitals in Iran between April and July 2016. A total of 120 adult patients undergoing coronary angiography were recruited. Using a block randomisation method, participants were assigned to one of four groups, with those in the control group receiving no training and those in the three intervention groups receiving either peer-facilitated training, video-based training or a combination of both. A Persian-language validated version of the State-Trait Anxiety Inventory was used to measure pre- and post-intervention anxiety. There were no statistically significant differences in mean pre-intervention anxiety scores between the four groups (F = 0.31; P = 0.81). In contrast, there was a significant reduction in post-intervention anxiety among all three intervention groups compared to the control group (F = 27.71; P <0.01); however, there was no significant difference in anxiety level in terms of the type of intervention used. Peer, video and combined peer-and-video education were equally effective in reducing angiography-related patient anxiety. Such techniques are recommended to reduce anxiety amongst patients undergoing coronary angiography in hospitals in Iran.

  7. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  8. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super resolution (SR) for single-image and video reconstruction, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this function we put a driving device, such as piezoelectric ceramics, in the camera. By controlling the driving device, a set of continuous low resolution (LR) images can be obtained and stored instantaneously, which reflects both the randomness of the displacements and the real-time performance of the storage. The low resolution image sequences carry different redundant information and some particular a priori information, thus it is possible to restore a super resolution image faithfully and effectively. A sampling analysis is used to derive the reconstruction principle of super resolution and to analyze, in theory, the possible degree of resolution improvement. A learning-based super resolution algorithm is used to reconstruct a single image, and the variational Bayesian algorithm is simulated to reconstruct the low resolution images with random displacements; this models the unknown high resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super resolution image of the scene can be reconstructed. The results of reconstruction from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
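
    A minimal shift-and-add reconstruction conveys the basic idea of combining randomly displaced low resolution frames on a finer grid; it stands in for, and is far simpler than, the learning-based and variational Bayesian methods named in the abstract. The sketch below (Python with NumPy) assumes the sub-pixel shifts are already known from registration; all names and the factor-of-2 grid are illustrative.

      import numpy as np

      def shift_and_add_sr(lr_frames, shifts, scale=2):
          """Fuse shifted low-resolution frames onto a grid `scale` times finer.

          lr_frames: list of (H, W) arrays; shifts: list of (dy, dx) sub-pixel
          displacements (in LR pixels) of each frame relative to the first."""
          h, w = lr_frames[0].shape
          acc = np.zeros((h * scale, w * scale))
          hits = np.zeros_like(acc)
          ys, xs = np.mgrid[0:h, 0:w]
          for frame, (dy, dx) in zip(lr_frames, shifts):
              hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
              hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
              np.add.at(acc, (hy, hx), frame)      # accumulate samples on the HR grid
              np.add.at(hits, (hy, hx), 1.0)
          filled = hits > 0
          acc[filled] /= hits[filled]              # average where samples landed
          return acc                               # unvisited HR pixels stay at zero here

      # Example: four 0.5-pixel-shifted copies of a 16x16 ramp image give a 32x32 result.
      base = np.tile(np.arange(16.0), (16, 1))
      frames = [base] * 4
      shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
      print(shift_and_add_sr(frames, shifts).shape)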

  9. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the Adaptive Sensitivity™ patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24 bit color image) to any storage device installed in the camera, or to external host media via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for a stored image/data record by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  10. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  11. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  12. Sending Safety Video over WiMAX in Vehicle Communications

    Directory of Open Access Journals (Sweden)

    Jun Steed Huang

    2013-10-01

    Full Text Available This paper reports on the design of an OPNET simulation platform to test the performance of sending real-time safety video over VANET (Vehicular Adhoc NETwork using the WiMAX technology. To provide a more realistic environment for streaming real-time video, a video model was created based on the study of video traffic traces captured from a realistic vehicular camera, and different design considerations were taken into account. A practical controller over real-time streaming protocol is implemented to control data traffic congestion for future road safety development. Our driving video model was then integrated with the WiMAX OPNET model along with a mobility model based on real road maps. Using this simulation platform, different mobility cases have been studied and the performance evaluated in terms of end-to-end delay, jitter and visual experience.

  13. Playing to your skills: a randomised controlled trial evaluating a dedicated video game for minimally invasive surgery.

    Science.gov (United States)

    Harrington, Cuan M; Chaitanya, Vishwa; Dicker, Patrick; Traynor, Oscar; Kavanagh, Dara O

    2018-02-14

    Video gaming demands elements of visual attention, hand-eye coordination and depth perception which may be contiguous with laparoscopic skill development. General video gaming has demonstrated altered cortical plasticity and improved baseline/acquisition of minimally invasive skills. The present study aimed to evaluate for skill acquisition associated with a commercially available dedicated laparoscopic video game (Underground) and its unique (laparoscopic-like) controller for the Nintendo®Wii U™ console. This single-blinded randomised controlled study was conducted with laparoscopically naive student volunteers of limited (Virtual Reality (VR) simulator (LAP Mentor TM , 3D systems, Colorado, USA). Twenty participants were randomised to two groups; Group A was requested to complete 5 h of video gaming (Underground) per week and Group B to avoid gaming beyond their normal frequency. After 4 weeks participants were reassessed using the same VR tasks. Changes in simulator performances were assessed for each group and for intergroup variances using mixed model regression. Significant inter- and intragroup performances were present for the video gaming and controls across four basic tasks. The video gaming group demonstrated significant improvements in thirty-one of the metrics examined including dominant (p ≤ 0.004) and non-dominant (p entertainment distractions (11.1%). Our work revealed significant value in training using a dedicated laparoscopic video game for acquisition of virtual laparoscopic skills. This novel serious game may provide foundations for future surgical developments on game consoles in the home environment.

  14. Fuzzy Logic Control of Adaptive ARQ for Video Distribution over a Bluetooth Wireless Link

    Directory of Open Access Journals (Sweden)

    R. Razavi

    2007-01-01

    Full Text Available Bluetooth's default automatic repeat request (ARQ) scheme is not suited to video distribution, resulting in missed display and decode deadlines. Adaptive ARQ with active discard of expired packets from the send buffer is an alternative approach. However, even with the addition of cross-layer adaptation to picture-type packet importance, ARQ is not ideal in conditions of a deteriorating RF channel. The paper presents fuzzy logic control of ARQ, based on send buffer fullness and the head-of-line packet's deadline. The advantage of the fuzzy logic approach, which also scales its output according to picture-type importance, is that the impact of delay can be directly introduced into the model, causing retransmissions to be reduced compared to all other schemes. The scheme considers the delay constraints of the video stream and at the same time avoids send buffer overflow. Tests explore a variety of Bluetooth send buffer sizes and channel conditions. For adverse channel conditions and buffer sizes, the tests show an improvement of at least 4 dB in video quality compared to nonfuzzy schemes. The scheme can be applied to any codec with I-, P-, and (possibly) B-slices by inspection of packet headers, without the need for encoder intervention.
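
    To make the control idea concrete, here is a toy fuzzy rule evaluation (plain Python) over the two inputs the abstract names, send-buffer fullness and the head-of-line packet's deadline margin, with the output scaled by a picture-type weight. The membership breakpoints, rule base and scaling are invented for illustration and are not the tuned controller from the article.

      def tri(x, a, b, c):
          """Triangular membership function peaking at b over the interval [a, c]."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def retransmission_weight(buffer_fullness, deadline_margin_ms, picture_weight=1.0):
          """Fuzzy decision on how aggressively to retransmit the head-of-line packet.

          buffer_fullness in [0, 1]; deadline_margin_ms is the time left before the
          packet's display deadline; picture_weight reflects I/P/B importance."""
          full_low  = tri(buffer_fullness, -0.5, 0.0, 0.5)
          full_high = tri(buffer_fullness,  0.5, 1.0, 1.5)
          time_low  = tri(deadline_margin_ms, -50.0, 0.0, 50.0)
          time_high = tri(deadline_margin_ms,  25.0, 100.0, 400.0)

          # Rules: retransmit freely when the buffer is empty and time remains;
          # back off when the buffer fills or the deadline approaches.
          retransmit = min(full_low, time_high)
          back_off = max(full_high, time_low)
          score = retransmit - back_off            # in [-1, 1]
          return max(0.0, min(1.0, 0.5 * (1.0 + score))) * picture_weight

      # Example: nearly full buffer, 10 ms to deadline -> a weight near zero,
      # so the packet would be sent once (or discarded) rather than retried.
      print(retransmission_weight(0.9, 10.0, picture_weight=1.2))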

  15. Effects of a Video on Organ Donation Consent Among Primary Care Patients: A Randomized Controlled Trial.

    Science.gov (United States)

    Thornton, J Daryl; Sullivan, Catherine; Albert, Jeffrey M; Cedeño, Maria; Patrick, Bridget; Pencak, Julie; Wong, Kristine A; Allen, Margaret D; Kimble, Linda; Mekesa, Heather; Bowen, Gordon; Sehgal, Ashwini R

    2016-08-01

    Low organ donation rates remain a major barrier to organ transplantation. We aimed to determine the effect of a video and patient cueing on organ donation consent among patients meeting with their primary care provider. This was a randomized controlled trial between February 2013 and May 2014. The waiting rooms of 18 primary care clinics of a medical system in Cuyahoga County, Ohio. The study included 915 patients over 15.5 years of age who had not previously consented to organ donation. Just prior to their clinical encounter, intervention patients (n = 456) watched a 5-minute organ donation video on iPads and then choose a question regarding organ donation to ask their provider. Control patients (n = 459) visited their provider per usual routine. The primary outcome was the proportion of patients who consented for organ donation. Secondary outcomes included the proportion of patients who discussed organ donation with their provider and the proportion who were satisfied with the time spent with their provider during the clinical encounter. Intervention patients were more likely than control patients to consent to donate organs (22 % vs. 15 %, OR 1.50, 95%CI 1.10-2.13). Intervention patients were also more likely to have donation discussions with their provider (77 % vs. 18 %, OR 15.1, 95%CI 11.1-20.6). Intervention and control patients were similarly satisfied with the time they spent with their provider (83 % vs. 86 %, OR 0.87, 95%CI 0.61-1.25). How the observed increases in organ donation consent might translate into a greater organ supply is unclear. Watching a brief video regarding organ donation and being cued to ask a primary care provider a question about donation resulted in more organ donation discussions and an increase in organ donation consent. Satisfaction with the time spent during the clinical encounter was not affected. clinicaltrials.gov Identifier: NCT01697137.

  16. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  17. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    The main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low level, high resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen

  18. Scintigraphic acquisition entropy (2). A new approach in the quality control of the scintillation camera performances

    International Nuclear Information System (INIS)

    Elloumi, I.; Bouhdima, M.S.

    2002-01-01

    A new approach to monitoring the performance of gamma cameras, based on the entropy associated with the scintigraphic acquisition, is presented. We take into account the sensitivity, the variation of the collimator response as a function of depth, the uncertainty on the number of counts, the multiplex effect and the spatial uncertainty. This entropy function is expressed in terms of all the acquisition parameters: intrinsic crystal resolution, collimator characteristics, emitter object parameters and the source activity. The application of this method to the study of the influence of collimation shows that the entropy associated with a collimator permits a better appreciation of the quality of the acquisition and therefore a better analysis of collimator performance. Likewise, the evolution of the entropy associated with the acquisition of a uniform source image is in agreement with the variation of the quality of the image histogram. It is thus shown that neither the spatial resolution, nor the sensitivity, nor the signal-to-noise ratio is able to detect a variation in image quality when analysed individually. (author)

  19. A new lunar absolute control point: established by images from the landing camera on Chang'e-3

    International Nuclear Information System (INIS)

    Wang Fen-Fei; Liu Jian-Jun; Li Chun-Lai; Ren Xin; Mu Ling-Li; Yan Wei; Wang Wen-Rui; Xiao Jing-Tao; Tan Xu; Zhang Xiao-Xia; Zou Xiao-Duan; Gao Xing-Ye

    2014-01-01

    The establishment of a lunar control network is one of the core tasks in selenodesy, in which defining an absolute control point on the Moon is the most important step. However, up to now, the number of absolute control points has been very sparse. These absolute control points have mainly been lunar laser ranging retroreflectors, whose geographical location can be observed by observations on Earth and also identified in high resolution lunar satellite images. The Chang'e-3 (CE-3) probe successfully landed on the Moon, and its geographical location has been monitored by an observing station on Earth. Since its positional accuracy is expected to reach the meter level, the CE-3 landing site can become a new high precision absolute control point. We use a sequence of images taken from the landing camera, as well as satellite images taken by CE-1 and CE-2, to identify the location of the CE-3 lander. With its geographical location known, the CE-3 landing site can be established as a new absolute control point, which will effectively expand the current area of the lunar absolute control network by 22%, and can greatly facilitate future research in the field of lunar surveying and mapping, as well as selenodesy

  20. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
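
    The key geometric fact the abstract relies on, that the visual angle between two projection rays is unchanged by a camera rotation but generally changes under translation, is easy to verify numerically. The short Python/NumPy check below uses an assumed unit focal length and a roll rotation; it only demonstrates the invariance, not the full residual-minimisation method for recovering the heading.

      import numpy as np

      def ray(pixel, focal):
          """Unit projection ray through an image point (x, y) for focal length f."""
          v = np.array([pixel[0], pixel[1], focal], dtype=float)
          return v / np.linalg.norm(v)

      def visual_angle(p1, p2, focal):
          """Angle (radians) between the projection rays of two image points."""
          return np.arccos(np.clip(ray(p1, focal) @ ray(p2, focal), -1.0, 1.0))

      def rotation_z(theta):
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

      if __name__ == "__main__":
          P = np.array([[1.0, 0.5, 4.0], [-0.8, 0.2, 5.0]])   # two 3-D points
          f = 1.0
          project = lambda X: X[:2] / X[2] * f
          a_before = visual_angle(project(P[0]), project(P[1]), f)
          # Rotate the camera (equivalently, rotate the points the other way).
          Q = P @ rotation_z(0.3).T
          a_rot = visual_angle(project(Q[0]), project(Q[1]), f)
          # Translate the camera instead.
          T = P + np.array([0.5, 0.0, 0.0])
          a_trans = visual_angle(project(T[0]), project(T[1]), f)
          print(a_before, a_rot, a_trans)   # rotation leaves the angle unchanged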

  1. Parent-child relationships and self‑control in male university students' desire to play video games.

    Science.gov (United States)

    Karbasizadeh, Sina; Jani, Masih; Keshvari, Mahtab

    2018-06-12

    To determine the relationship between the parent-child relationship, self-control and demographic characteristics and the desire to play video games among male university students at one university in Iran. This was a correlational, descriptive, applied study. A total of 103 male students were selected randomly as a study sample from the population of male students at Isfahan University in Iran. Data collection tools used were the Video Games Questionnaire, Tanji's Self-Control Scale, Parent-Child Relationship Questionnaire, and Demographic Questionnaire. Data were analysed using stepwise regression analysis. This study found several factors increased male students' desire to play video games. Demographic characteristics associated with increased tendency to play video games among male students in Iran are older age, larger number of family members, lower parental level of education and higher socio-economic class, while other significant factors are a lower level of self‑control and a poorer parent-child relationship. PARTICIPANTS': higher socio-economic class, lower level of self-control and older age explained 8.2%, 5.2% and 5.9% of their desire to play video games, respectively. These three variables together accounted for significantly 16.9% of a male student's desire to play video games in this study ( P video games in Iran. Moreover, lower levels of self-control and a poorer parent-child relationship were found to be accompanied by a greater desire to play video games among male university students. © 2018 RCN Publishing Company Ltd. All rights reserved. Not to be copied, transmitted or recorded in any way, in whole or part, without prior permission of the publishers.

  2. Using camera traps and digital video to investigate the impact of the Aethina tumida pest on honey bee (Apis mellifera adansonii) reproduction and ability to keep away elephants (Loxodonta africana cyclotis) in Gamba, Gabon

    Directory of Open Access Journals (Sweden)

    Steeve Ngama

    2018-06-01

    Full Text Available Bee and elephant interactions are at the core of a conservation curiosity, since it has been demonstrated that bees, one of the smallest domesticated animals, can keep away elephants, the largest terrestrial animals. Yet, insect parasites can impact the fitness and activity of the bees. Since their activity is critical to the repellent ability against elephants, this study assessed the impact of small hive beetles (Aethina tumida) on bee (Apis mellifera adansonii) reproduction and ability to keep forest elephants (Loxodonta africana cyclotis) away. Because interspecies interactions are not easy to investigate, we used camera traps and digital video to observe the activity of bees and their interactions with wild forest elephants under varying conditions of hive infestation with the small hive beetle, a common bee pest. Our results show that queen cells are good visual indicators of colony efficiency in keeping forest elephants away. We give evidence that small hive beetles are equally present in large and small bee colonies. Yet, the results raise no concerns about the use of bees as elephant deterrents on account of parasitism by small hive beetles: Apis mellifera adansonii bees seem to cope effectively with small hive beetles, which show no significant influence on their reproduction and ability to keep elephants away. This study also reports for the first time the presence of Aethina tumida as a constant beekeeping pest that needs to be addressed in Gabon.

  3. Randomized controlled trial of video self-modeling following speech restructuring treatment for stuttering.

    Science.gov (United States)

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-08-01

    In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech restructuring treatment. Post treatment, participants were randomly assigned to 2 trial arms: standard maintenance and standard maintenance plus VSM. Participants in the latter arm viewed stutter-free videos of themselves each day for 1 month. The addition of VSM did not improve speech outcomes, as measured by percent syllables stuttered, at either 1 or 6 months postrandomization. However, at the latter assessment, self-rating of worst stuttering severity by the VSM group was 10% better than that of the control group, and satisfaction with speech fluency was 20% better. Quality of life was also better for the VSM group, which was mildly to moderately impaired compared with moderate impairment in the control group. VSM intervention after treatment was associated with improvements in self-reported outcomes. The clinical implications of this finding are discussed.

  4. Effects of reducing children's television and video game use on aggressive behavior: a randomized controlled trial.

    Science.gov (United States)

    Robinson, T N; Wilde, M L; Navracruz, L C; Haydel, K F; Varady, A

    2001-01-01

    The relationship between exposure to aggression in the media and children's aggressive behavior is well documented. However, few potential solutions have been evaluated. To assess the effects of reducing television, videotape, and video game use on aggressive behavior and perceptions of a mean and scary world. Randomized, controlled, school-based trial. Two sociodemographically and scholastically matched public elementary schools in San Jose, Calif. Third- and fourth-grade students (mean age, 8.9 years) and their parents or guardians. Children in one elementary school received an 18-lesson, 6-month classroom curriculum to reduce television, videotape, and video game use. In September (preintervention) and April (postintervention) of a single school year, children rated their peers' aggressive behavior and reported their perceptions of the world as a mean and scary place. A 60% random sample of children were observed for physical and verbal aggression on the playground. Parents were interviewed by telephone and reported aggressive and delinquent behaviors on the child behavior checklist. The primary outcome measure was peer ratings of aggressive behavior. Compared with controls, children in the intervention group had statistically significant decreases in peer ratings of aggression (adjusted mean difference, -2.4%; 95% confidence interval [CI], -4.6 to -0.2; P =.03) and observed verbal aggression (adjusted mean difference, -0.10 act per minute per child; 95% CI, -0.18 to -0.03; P =.01). Differences in observed physical aggression, parent reports of aggressive behavior, and perceptions of a mean and scary world were not statistically significant but favored the intervention group. An intervention to reduce television, videotape, and video game use decreases aggressive behavior in elementary schoolchildren. These findings support the causal influences of these media on aggression and the potential benefits of reducing children's media use.

  5. Effects of music and music video interventions on sleep quality: A randomized controlled trial in adults with sleep disturbances.

    Science.gov (United States)

    Huang, Chiung-Yu; Chang, En-Ting; Hsieh, Yuan-Mei; Lai, Hui-Ling

    2017-10-01

    The present study aimed to compare the effects of music and music video interventions on objective and subjective sleep quality in adults with sleep disturbances. A randomized controlled trial was performed on 71 adults who were recruited from the outpatient department of a hospital with 1100 beds and randomly assigned to the control, music, and music video groups. During the 4 test days (Days 2-5), for 30 min before nocturnal sleep, the music group listened to Buddhist music and the music video group watched Buddhist music videos. They were instructed not to listen to the music or watch the music videos on the first night (pretest, Day 1) and the final night (Day 6). The control group received no intervention. Sleep was assessed using a one-channel electroencephalography machine in their homes and self-reported questionnaires. The music and music video interventions had no effect on any objective sleep parameters, as measured using electroencephalography. However, the music group had significantly longer subjective total sleep time than the music video group did (Wald χ² = 6.23, p = 0.04). Our study results increase knowledge regarding music interventions for sleep quality in adults with sleep disturbances. This study suggested that more research is required to strengthen the scientific knowledge of the effects of music intervention on sleep quality in adults with sleep disturbances. (ISRCTN94971645). Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A case-control study of wicket spikes using video-EEG monitoring.

    Science.gov (United States)

    Vallabhaneni, Maya; Baldassari, Laura E; Scribner, James T; Cho, Yong Won; Motamedi, Gholam K

    2013-01-01

    To investigate clinical characteristics associated with wicket spikes in patients undergoing long-term video-EEG monitoring. A case-control study was performed in 479 patients undergoing video-EEG monitoring, with 3 age- (±3 years) and gender-matched controls per patient with wicket spikes. Logistic regression was utilized to investigate the association between wicket spikes and other factors, including conditions that have been previously associated with wicket spikes. Wicket spikes were recorded in 48 patients. There was a significantly higher prevalence of dizziness/vertigo (p=0.002), headaches (p=0.005), migraine (p=0.015), and seizures (p=0.016) in patients with wickets. The majority of patients with wicket spikes did not exhibit epileptiform activity on EEG; however, patients with history of seizures were more likely to have wickets (p=0.017). There was no significant difference in the prevalence of psychogenic non-epileptic seizures between the groups. Wickets were more common on the left, during sleep, and more likely to be first recorded on day 1-2 of monitoring. Patients with wicket spikes are more likely to have dizziness/vertigo, headaches, migraine, and seizures. Patients with history of seizures are more likely to have wickets. The prevalence of psychogenic non-epileptic seizures is not significantly higher in patients with wickets. Copyright © 2012 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  7. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest High Efficiency Video Coding (HEVC) standard significantly increases encoding complexity in order to improve coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Because encoding time is directly proportional to computational complexity, computational complexity is measured in terms of encoding time. First, the target complexity is mapped to a set of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, an optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
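
    A minimal sketch of the budget-driven mode selection idea described above, assuming a hypothetical table of per-mode time costs obtained offline; the mode names, cost weights, and greedy per-CTU budgeting are illustrative placeholders, not the authors' actual HEVC implementation.

```python
# Illustrative complexity-constrained mode selection; not the paper's encoder code.
# Relative encoding-time cost of each candidate mode combination (offline estimates).
MODE_COMBINATIONS = {
    "merge_only":        0.10,   # cheapest: merge/skip candidates only
    "merge_inter2Nx2N":  0.35,   # add 2Nx2N inter prediction
    "merge_inter_intra": 0.70,   # add intra modes
    "full_rdo":          1.00,   # exhaustive rate-distortion optimization
}

def select_modes(ctu_count, target_ratio):
    """Assign a mode combination to each CTU so that the summed relative cost
    stays within target_ratio * ctu_count (the full-RDO cost of the frame)."""
    budget = target_ratio * ctu_count
    spent = 0.0
    decisions = []
    for i in range(ctu_count):
        remaining_ctus = ctu_count - i
        per_ctu = (budget - spent) / remaining_ctus   # budget share left per CTU
        # Pick the most expensive combination that still fits the per-CTU share.
        affordable = [m for m, c in MODE_COMBINATIONS.items() if c <= per_ctu]
        choice = max(affordable, key=MODE_COMBINATIONS.get) if affordable else "merge_only"
        spent += MODE_COMBINATIONS[choice]
        decisions.append(choice)
    return decisions

if __name__ == "__main__":
    plan = select_modes(ctu_count=120, target_ratio=0.4)
    print(plan[:8], "total relative cost:", sum(MODE_COMBINATIONS[m] for m in plan))
```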

  8. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model, and this learning-based R-D model is used to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a cooperative bargaining game based on the mixed R-D model is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is obtained by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Finally, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames receive more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results, and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
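
    The following toy sketch illustrates the Nash-bargaining style of bit allocation described above, assuming a simple hyperbolic R-D model D_i(R) = c_i / R, a fixed "walk-away" distortion as the disagreement point, and a plain greedy reallocation loop; these choices are illustrative placeholders, not the paper's learned SVM-based model or its exact solution method.

```python
import math

# Toy cooperative-bargaining bit allocation among CTUs; not the paper's algorithm.
def nbs_allocate(c, r_total, d_max=1.0, r_floor=2.0, step=0.5, iters=5000):
    n = len(c)
    rates = [r_total / n] * n                      # start from a uniform split

    def surplus(i, r):
        # Utility above the disagreement point: D_max minus the achieved distortion c_i / r.
        return d_max - c[i] / r

    def nash_log(rs):
        # Log of the Nash product over all CTUs.
        return sum(math.log(surplus(i, r)) for i, r in enumerate(rs))

    for _ in range(iters):
        # Marginal gain in the log Nash product per extra unit of rate for each CTU.
        grads = [(c[i] / rates[i] ** 2) / surplus(i, rates[i]) for i in range(n)]
        src = min(range(n), key=lambda i: grads[i])   # donor: smallest marginal gain
        dst = max(range(n), key=lambda i: grads[i])   # receiver: largest marginal gain
        if src == dst or grads[dst] - grads[src] < 1e-9 or rates[src] - step < r_floor:
            break                                     # converged or donor at the floor
        rates[src] -= step
        rates[dst] += step
    return rates, nash_log(rates)

if __name__ == "__main__":
    complexities = [4.0, 1.0, 2.5, 8.0]            # per-CTU R-D parameters c_i (illustrative)
    alloc, score = nbs_allocate(complexities, r_total=100.0)
    print([round(r, 1) for r in alloc], round(score, 3))
```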

  9. A controlled pilot trial of two commercial video games for rehabilitation of arm function after stroke.

    Science.gov (United States)

    Chen, Mei-Hsiang; Huang, Lan-Ling; Lee, Chang-Franw; Hsieh, Ching-Lin; Lin, Yu-Chao; Liu, Hsiuchih; Chen, Ming-I; Lu, Wen-Shian

    2015-07-01

    To investigate the acceptability and potential efficacy of two commercial video games for improving upper extremity function after stroke, in order to inform future sample size and study design. A controlled clinical trial design using sequential allocation into groups. A clinical occupational therapy department. Twenty-four first-stroke patients. Patients were assigned to one of three groups: conventional group, Wii group, and XaviX group. In addition to regular one-hour conventional rehabilitation, each group received an additional half-hour of upper extremity exercises via conventional devices, Wii games, or XaviX games, for eight weeks. The Fugl-Meyer Assessment of motor function, Box and Block Test of Manual Dexterity, Functional Independence Measure, and upper extremity range of motion were used at baseline and postintervention. A questionnaire was also used to assess motivation and enjoyment. On the Fugl-Meyer Assessment of motor function, the effect size of the difference in change scores between the Wii and conventional groups (change 0.71 [SD 0.59] vs 0.28 [SD 0.58]; d = 0.74) was larger than that between the XaviX and conventional groups (0.44 [SD 0.49] vs 0.28 [SD 0.58]; d = 0.30). Patient enjoyment was significantly greater in the video game groups (Wii mean 4.25, SD 0.89; XaviX mean 4.38, SD 0.52) than in the conventional group (mean 2.25, SD 0.89; F = 18.55), supporting the use of video games in rehabilitation. A sample size of 72 patients (24 per group) would be appropriate for a full study. © The Author(s) 2014.

  10. Patterned Video Sensors For Low Vision

    Science.gov (United States)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns, intended to compensate partly for some visual defects, are proposed. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects such as retinitis pigmentosa.

  11. Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Yunsick Sung

    2018-03-01

    Full Text Available Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving the Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images, respectively.
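
    A rough sketch of the road-extraction-and-cropping idea, assuming OpenCV is available; the grayish-road color heuristic, Canny thresholds, and output size are arbitrary placeholders rather than the authors' actual pipeline.

```python
import cv2
import numpy as np

def build_cnn_input(frame_bgr, out_size=(200, 66)):
    """Extract a road-like region, detect its edges, and merge them into a
    compact image suitable as CNN input (illustrative only)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumption: road pixels are low-saturation (grayish) areas.
    road_mask = cv2.inRange(hsv, (0, 0, 40), (180, 60, 220))

    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    road_edges = cv2.bitwise_and(edges, edges, mask=road_mask)

    ys, xs = np.nonzero(road_edges)
    if len(xs) == 0:                     # fall back to the full frame
        y0, y1, x0, x1 = 0, frame_bgr.shape[0], 0, frame_bgr.shape[1]
    else:                                # crop to the detected road-edge region
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

    crop = cv2.resize(frame_bgr[y0:y1, x0:x1], out_size)
    edge_crop = cv2.resize(road_edges[y0:y1, x0:x1], out_size)
    # Stack the resized crop with its edge map as an extra channel: H x W x 4.
    return np.dstack([crop, edge_crop])
```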

  12. Do Motion Controllers Make Action Video Games Less Sedentary? A Randomized Experiment

    Science.gov (United States)

    Lyons, Elizabeth J.; Tate, Deborah F.; Ward, Dianne S.; Ribisl, Kurt M.; Bowling, J. Michael; Kalyanaraman, Sriram

    2012-01-01

    Sports- and fitness-themed video games using motion controllers have been found to produce physical activity. It is possible that motion controllers may also enhance energy expenditure when applied to more sedentary games such as action games. Young adults (N = 100) were randomized to play three games using either motion-based or traditional controllers. No main effect was found for controller or game pair (P > .12). An interaction was found such that in one pair, motion control (mean [SD] 0.96 [0.20] kcal · kg−1 · hr−1) produced 0.10 kcal · kg−1 · hr−1 (95% confidence interval 0.03 to 0.17) greater energy expenditure than traditional control (0.86 [0.17] kcal · kg−1 · hr−1, P = .048). All games were sedentary. As currently implemented, motion control is unlikely to produce moderate intensity physical activity in action games. However, some games produce small but significant increases in energy expenditure, which may benefit health by decreasing sedentary behavior. PMID:22028959

  13. Do Motion Controllers Make Action Video Games Less Sedentary? A Randomized Experiment

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Lyons

    2012-01-01

    Full Text Available Sports- and fitness-themed video games using motion controllers have been found to produce physical activity. It is possible that motion controllers may also enhance energy expenditure when applied to more sedentary games such as action games. Young adults (N = 100) were randomized to play three games using either motion-based or traditional controllers. No main effect was found for controller or game pair (P > .12). An interaction was found such that in one pair, motion control (mean [SD] 0.96 [0.20] kcal · kg−1 · hr−1) produced 0.10 kcal · kg−1 · hr−1 (95% confidence interval 0.03 to 0.17) greater energy expenditure than traditional control (0.86 [0.17] kcal · kg−1 · hr−1, P = .048). All games were sedentary. As currently implemented, motion control is unlikely to produce moderate intensity physical activity in action games. However, some games produce small but significant increases in energy expenditure, which may benefit health by decreasing sedentary behavior.

  14. Do motion controllers make action video games less sedentary? A randomized experiment.

    Science.gov (United States)

    Lyons, Elizabeth J; Tate, Deborah F; Ward, Dianne S; Ribisl, Kurt M; Bowling, J Michael; Kalyanaraman, Sriram

    2012-01-01

    Sports- and fitness-themed video games using motion controllers have been found to produce physical activity. It is possible that motion controllers may also enhance energy expenditure when applied to more sedentary games such as action games. Young adults (N = 100) were randomized to play three games using either motion-based or traditional controllers. No main effect was found for controller or game pair (P > .12). An interaction was found such that in one pair, motion control (mean [SD] 0.96 [0.20] kcal · kg(-1) · hr(-1)) produced 0.10 kcal · kg(-1) · hr(-1) (95% confidence interval 0.03 to 0.17) greater energy expenditure than traditional control (0.86 [0.17] kcal · kg(-1) · hr(-1), P = .048). All games were sedentary. As currently implemented, motion control is unlikely to produce moderate intensity physical activity in action games. However, some games produce small but significant increases in energy expenditure, which may benefit health by decreasing sedentary behavior.

  15. Design considerations to improve cognitive ergonomic issues of unmanned vehicle interfaces utilizing video game controllers.

    Science.gov (United States)

    Oppold, P; Rupp, M; Mouloua, M; Hancock, P A; Martin, J

    2012-01-01

    Unmanned systems (UAVs, UCAVs, and UGVs) still present major human factors and ergonomic challenges related to the effective design of their control interface systems, which are crucial to their efficient operation, maintenance, and safety. Unmanned system interfaces designed with a human-centered approach promote intuitive interfaces that are easier to learn and reduce human errors and other cognitive ergonomic issues with interface design. Automation has shifted workload from physical to cognitive; thus control interfaces for unmanned systems need to reduce the mental workload on operators and facilitate the interaction between vehicle and operator. Two-handed video game controllers provide wide usability within the overall population, prior exposure for new operators, and a variety of interface complexity levels to match the complexity level of the task and reduce cognitive load. This paper categorizes and provides a taxonomy for 121 haptic interfaces from the entertainment industry that can be utilized as control interfaces for unmanned systems. Five categories of controllers were defined based on the complexity of the buttons, control pads, joysticks, and switches on the controller. This allows the selection of the level of complexity needed for a specific task without creating an entirely new design or utilizing an overly complex design.

  16. Self-control over combined video feedback and modeling facilitates motor learning.

    Science.gov (United States)

    Post, Phillip G; Aiken, Christopher A; Laughlin, David D; Fairbrother, Jeffrey T

    2016-06-01

    Allowing learners to control the video presentation of knowledge of performance (KP) or an expert model during practice has been shown to facilitate motor learning (Aiken, Fairbrother, & Post, 2012; Wulf, Raupach, & Pfeiffer, 2005). Split-screen replay features now allow for the simultaneous presentation of these modes of instructional support. It is uncertain, however, if such a combination incorporated into a self-control protocol would yield similar benefits seen in earlier self-control studies. Therefore, the purpose of the present study was to examine the effects of self-controlled split-screen replay on the learning of a golf chip shot. Participants completed 60 practice trials, three administrations of the Intrinsic Motivation Inventory, and a questionnaire on day one. Retention and transfer tests and a final motivation inventory were completed on day two. Results revealed significantly higher form and accuracy scores for the self-control group during transfer. The self-control group also had significantly higher scores on the perceived competence subscale, reported requesting feedback mostly after perceived poor trials, and recalled a greater number of critical task features compared to the yoked group. The findings for the performance measures were consistent with previous self-control research. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    Science.gov (United States)

    Brown, Michael A.

    2009-01-01

    With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are ready to substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products. The DS process shall invest in collectively automating the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration / sustained configuration, integration with video adjustment packages, collaborative tools, host / recipient controllability, and, the utmost paramount priority, an enterprise solution that provides ownership to the whole

  18. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
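
    The following sketch illustrates only the general idea of authenticating streamed frames (hash chaining plus a keyed MAC and timestamps), assuming a shared software key in place of the TPM-backed Trusted Computing mechanisms used in the paper; it is not the authors' implementation.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-key-not-a-tpm"           # placeholder for a hardware-protected key

def sign_frame(frame_bytes, prev_digest, frame_index):
    """Produce an authenticated record for one video frame."""
    record = {
        "index": frame_index,
        "timestamp": time.time(),             # image timestamping
        "frame_sha256": hashlib.sha256(frame_bytes).hexdigest(),
        "prev": prev_digest,                  # hash chaining across frames
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record, record["frame_sha256"]

def verify_frame(record):
    """Check the MAC of a received frame record."""
    payload = json.dumps({k: record[k] for k in
                          ("index", "timestamp", "frame_sha256", "prev")},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["mac"])

if __name__ == "__main__":
    rec, digest = sign_frame(b"fake-frame-data", prev_digest="0" * 64, frame_index=0)
    print(verify_frame(rec))                  # True unless the record was tampered with
```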

  19. Video Game Intervention for Sexual Risk Reduction in Minority Adolescents: Randomized Controlled Trial.

    Science.gov (United States)

    Fiellin, Lynn E; Hieftje, Kimberly D; Pendergrass, Tyra M; Kyriakides, Tassos C; Duncan, Lindsay R; Dziura, James D; Sawyer, Benjamin G; Mayes, Linda; Crusto, Cindy A; Forsyth, Brian Wc; Fiellin, David A

    2017-09-18

    Human immunodeficiency virus (HIV) disproportionately impacts minority youth. Interventions to decrease HIV sexual risk are needed. We hypothesized that an engaging theory-based digital health intervention in the form of an interactive video game would improve sexual health outcomes in adolescents. Participants aged 11 to 14 years from 12 community afterschool, school, and summer programs were randomized 1:1 to play up to 16 hours of an experimental video game or control video games over 6 weeks. Assessments were conducted at 6 weeks and at 3, 6, and 12 months. Primary outcome was delay of initiation of vaginal/anal intercourse. Secondary outcomes included sexual health attitudes, knowledge, and intentions. We examined outcomes by gender and age. A total of 333 participants were randomized to play the intervention (n=166) or control games (n=167): 295 (88.6%) were racial/ethnic minorities, 177 (53.2%) were boys, and the mean age was 12.9 (1.1) years. At 12 months, for the 258 (84.6%) participants with available data, 94.6% (122/129) in the intervention group versus 95.4% (123/129) in the control group delayed initiation of intercourse (relative risk=0.99, 95% CI 0.94-1.05, P=.77). Over 12 months, the intervention group demonstrated improved sexual health attitudes overall compared to the control group (least squares means [LS means] difference 0.37, 95% CI 0.01-0.72, P=.04). This improvement was observed in boys (LS means difference 0.67, P=.008), but not girls (LS means difference 0.06, P=.81), and in younger (LS means difference 0.71, P=.005), but not older participants (LS means difference 0.03, P=.92). The intervention group also demonstrated increased sexual health knowledge overall (LS means difference 1.13, 95% CI 0.64-1.61). This video game intervention improves sexual health attitudes and knowledge in minority adolescents for at least 12 months. Clinicaltrials.gov NCT01666496; https://clinicaltrials.gov/ct2/show/NCT01666496 (Archived by WebCite at http

  20. Design of a multi-hole phantom for gamma camera quality control

    International Nuclear Information System (INIS)

    Ben Krir, Wafa; Ben Ameur, Narjes

    2009-01-01

    In this study we present the technique of scintigraphy in its various theoretical and practical aspects. We also show the importance of quality control procedures carried out according to international standards such as NEMA. Starting from the different phantoms currently in use, developed according to these standards, we designed our own phantom. In addition, this implementation helped to clarify our expectations concerning the functionality of the phantom. The results were conclusive, since they made it possible to perform fast, low-cost quality control without ambiguity. We have thus demonstrated the reliability of our phantom.

  1. Soft x-ray camera for internal shape and current density measurements on a noncircular tokamak

    International Nuclear Information System (INIS)

    Fonck, R.J.; Jaehnig, K.P.; Powell, E.T.; Reusch, M.; Roney, P.; Simon, M.P.

    1988-05-01

    Soft x-ray measurements of the internal plasma flux surface shapes in principle allow a determination of the plasma current density distribution and provide a necessary monitor of the degree of internal elongation of tokamak plasmas with a noncircular cross section. A two-dimensional, tangentially viewing, soft x-ray pinhole camera has been fabricated to provide internal shape measurements on the PBX-M tokamak. It consists of a scintillator at the focal plane of a foil-filtered pinhole camera, which is, in turn, fiber-optically coupled to an intensified framing video camera (Δt ≥ 3 msec). Automated data acquisition is performed on a stand-alone image-processing system, and data archiving and retrieval take place on an optical disk video recorder. The entire diagnostic is controlled via a PDP-11/73 microcomputer. The derivation of the poloidal emission distribution from the measured image is done by fitting to model profiles. 10 refs., 4 figs

  2. Utilising advance care planning videos to empower perioperative cancer patients and families: a study protocol of a randomised controlled trial.

    Science.gov (United States)

    Aslakson, Rebecca A; Isenberg, Sarina R; Crossnohere, Norah L; Conca-Cheng, Alison M; Yang, Ting; Weiss, Matthew; Volandes, Angelo E; Bridges, John F P; Roter, Debra L

    2017-06-06

    Despite positive health outcomes associated with advance care planning (ACP), little research has investigated the impact of ACP in surgical populations. Our goal is to evaluate how an ACP intervention video impacts the patient centredness and ACP of the patient-surgeon conversation during the presurgical consent visit. We hypothesise that patients who view the intervention will engage in more patient-centred communication with their surgeons compared with patients who view a control video. Randomised controlled superiority trial of an ACP video with two study arms (intervention ACP video and control video) and four visits (baseline, presurgical consent, postoperative 1 week and postoperative 1 month). Surgeons, patients, principal investigator and analysts are blinded to the randomisation assignment. Single, academic, inner city and tertiary care hospital. Data collection began July 16, 2015 and continues to March 2017. Patients recruited from nine surgical oncology clinics who are undergoing major cancer surgery. In the intervention arm, patients view a patient preparedness video developed through extensive engagement with patients, surgeons and other stakeholders. Patients randomised to the control arm viewed an informational video about the hospital surgical programme. Primary outcome: patient centredness and ACP of patient-surgeon conversations during the presurgical consent visit as measured through the Roter Interaction Analysis System. Secondary outcomes: patient Hospital Anxiety and Depression Scale score; patient goals of care; patient, companion and surgeon satisfaction; video helpfulness; medical decision maker designation; and the frequency with which patients watch the video. Intent-to-treat analysis will be used to assess the impact of video assignment on outcomes. Sensitivity analyses will assess whether there are differential effects contingent on patient or surgeon characteristics. This study has been approved by the Johns Hopkins School of Medicine institutional review board

  3. Video-feedback intervention increases sensitive parenting in ethnic minority mothers: a randomized control trial.

    Science.gov (United States)

    Yagmur, Sengul; Mesman, Judi; Malda, Maike; Bakermans-Kranenburg, Marian J; Ekmekci, Hatice

    2014-01-01

    Using a randomized control trial design we tested the effectiveness of a culturally sensitive adaptation of the Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a sample of 76 Turkish minority families in the Netherlands. The VIPP-SD was adapted based on a pilot with feedback of the target mothers, resulting in the VIPP-TM (VIPP-Turkish Minorities). The sample included families with 20-47-month-old children with high levels of externalizing problems. Maternal sensitivity, nonintrusiveness, and discipline strategies were observed during pretest and posttest home visits. The VIPP-TM was effective in increasing maternal sensitivity and nonintrusiveness, but not in enhancing discipline strategies. Applying newly learned sensitivity skills in discipline situations may take more time, especially in a cultural context that favors more authoritarian strategies. We conclude that the VIPP-SD program and its video-feedback approach can be successfully applied in immigrant families with a non-Western cultural background, with demonstrated effects on parenting sensitivity and nonintrusiveness.

  4. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  5. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen

  6. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simpler than the current state of the art is described; in addition to good localization, it also permits energy discrimination. Behind the usual vacuum image intensifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de

  7. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  8. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  9. An Intelligent Automated Door Control System Based on a Smart Camera

    Directory of Open Access Journals (Sweden)

    Jiann-Jone Chen

    2013-05-01

    Full Text Available This paper presents an innovative access control system, based on human detection and path analysis, to reduce false automatic door system actions while increasing the added values for security applications. The proposed system can first identify a person from the scene, and track his trajectory to predict his intention for accessing the entrance, and finally activate the door accordingly. The experimental results show that the proposed system has the advantages of high precision, safety, reliability, and can be responsive to demands, while preserving the benefits of being low cost and high added value.
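
    A toy sketch of the track-then-predict logic described above, assuming floor-plane positions are already available from a person tracker; the door location, thresholds, and simple linear extrapolation are illustrative choices, not the authors' path-analysis method.

```python
import numpy as np

DOOR_X = 0.0            # assumed door line on the floor plane (metres)
APPROACH_RADIUS = 2.0   # only consider people this close to the door (metres)

def predicts_entry(track, horizon=1.5):
    """track: list of (x, y) floor positions in metres, oldest first.
    Returns True if the person appears to be heading for the entrance."""
    if len(track) < 2:
        return False
    pts = np.asarray(track, dtype=float)
    pos = pts[-1]
    velocity = pts[-1] - pts[-2]            # last displacement per frame step
    future = pos + horizon * velocity       # simple linear extrapolation
    moving_toward_door = future[0] < pos[0]             # x decreasing toward the door line
    near_door = abs(pos[0] - DOOR_X) <= APPROACH_RADIUS
    return moving_toward_door and near_door

if __name__ == "__main__":
    walking_in = [(3.0, 0.5), (2.4, 0.5), (1.8, 0.5), (1.2, 0.5)]
    walking_by = [(1.0, -2.0), (1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]
    print(predicts_entry(walking_in), predicts_entry(walking_by))   # True False
```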

  10. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a very crucial component in safeguard and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities, such as better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguard requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguard surveillance systems. Today's safeguard systems can incorporate intelligent motion detection with a very low rate of false alarms and a smaller archiving volume, embedded image processing capability for object behavior and event based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, and tamper-indicating transmission. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of front and video imagery exploitation tools like automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, gesture recognition, etc. makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional vision capable, all-weather, day-night surveillance a reality.
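
    As a concrete illustration of the kind of video motion detection mentioned above, the sketch below uses plain frame differencing with OpenCV (the 4.x findContours signature is assumed); real safeguard-grade detectors are far more robust, and the thresholds and camera index here are arbitrary.

```python
import cv2

def run_motion_detector(camera_index=0, min_area=500, thresh=25):
    """Report motion events from a camera using simple frame differencing."""
    cap = cv2.VideoCapture(camera_index)
    ok, prev = cap.read()
    if not ok:
        raise RuntimeError("camera not available")
    prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        diff = cv2.absdiff(prev_gray, gray)                       # pixel-wise change
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)               # fill small holes
        # OpenCV 4.x returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        moving = [c for c in contours if cv2.contourArea(c) >= min_area]
        if moving:
            print("motion event:", len(moving), "region(s)")
        prev_gray = gray
    cap.release()
```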

  11. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  12. Investigating the illusion of control in mildly depressed and nondepressed individuals during video-poker play.

    Science.gov (United States)

    Dannewitz, Holly; Weatherly, Jeffrey N

    2007-05-01

    Cognitive fallacies, such as the illusion of control, and psychological disorders, such as depression, may perpetuate gambling and thus contribute to problem gambling (e.g., R. Ladouceur, C. Sylvan, C. Boutin, & C. Doucet, 2002). Gender differences may exist across these variables (e.g., N. M. Petry, 2005). The authors investigated these possibilities by recruiting mildly depressed and nondepressed individuals to play jacks or better, 5-card draw, video poker. Across three poker sessions, participants were given (a) no choice of which cards to play, (b) information on the best cards to play but control over which cards were played, or (c) no information and complete control over which cards were played. The total amount of money gambled increased as control over the game decreased, but this result correlated with an increase in the rate of play. Depressed and nondepressed participants did not differ in how they gambled, but men gambled significantly more and sometimes made more mistakes during play than did women. These results question the role of the illusion of control and depression in perpetuating gambling. They also suggest that providing players information about which cards to play may indirectly promote gambling and provide insight as to why men are more prone to suffer from gambling problems than are women.

  13. Psychological, behavioral, and clinical effects of intra-oral camera: a randomized control trial on adults with gingivitis.

    Science.gov (United States)

    Araújo, Mário-Rui; Alvarez, Maria-João; Godinho, Cristina A; Pereira, Cícero

    2016-12-01

    To evaluate the effects of using an intra-oral camera (IOC) during supportive periodontal therapy (SPT) on the psychological, behavioral, and clinical parameters of patients with gingivitis, outlined by an evidence- and theory-based framework. A group of 78 adult patients with gingivitis receiving SPT was randomized into two groups: IOC and control. Bleeding on Marginal Probing (BOMP), self-reported dental hygiene behaviors, psychological determinants of behavior change (outcome expectancies, self-efficacy, and planning), and opinion of the IOC were evaluated 1 week before or during the appointment and 4 months later. Repeated-measures ANOVA was used to compare groups over time. Almost all the patients brushed their teeth daily, while 78% either never or hardly ever used dental floss. The IOC group showed significant improvements in the BOMP index (P < 0.001), self-reported flossing (P < 0.05), and self-efficacy (P < 0.05) compared to the control group. The use of the IOC significantly improves clinical, behavioral, and psychological determinants of periodontal health 4 months after treatment. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Perceived control in rhesus monkeys (Macaca mulatta) - Enhanced video-task performance

    Science.gov (United States)

    Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.

    1991-01-01

    This investigation was designed to determine whether perceived control effects found in humans extend to rhesus monkeys (Macaca mulatta) tested in a video-task format, using a computer-generated menu program, SELECT. Choosing one of the options in SELECT resulted in presentation of five trials of a corresponding task and subsequent return to the menu. In Experiments 1-3, the animals exhibited stable, meaningful response patterns in this task (i.e., they made choices). In Experiment 4, performance on tasks that were selected by the animals significantly exceeded performance on identical tasks when assigned by the experimenter under comparable conditions (e.g., time of day, order, variety). The reliable and significant advantage for performance on selected tasks, typically found in humans, suggests that rhesus monkeys were able to perceive the availability of choices.

  15. A Randomized Controlled Trial of Video Education versus Skill Demonstration: Which Is More Effective in Teaching Sterile Surgical Technique?

    Science.gov (United States)

    Pilieci, Stephanie N; Salim, Saad Y; Heffernan, Daithi S; Itani, Kamal M F; Khadaroo, Rachel G

    2018-04-01

    Video education has many advantages over traditional education including efficiency, convenience, and individualized learning. Learning sterile surgical technique (SST) is imperative for medical students, because proper technique helps prevent surgical site infections (SSIs). We hypothesize that video education is at least as effective as traditional skill demonstration in teaching first-year medical students SST. A video series was created to demonstrate SST ( https://www.youtube.com/playlist?list=PLcRU-gvOmxE2mwMWkowouBkxGXkLZ8Uis ). A randomized controlled trial was designed to assess which education method best teaches SST: video education or skill demonstration. First-year medical students (n = 129) were consented and randomly assigned into two groups: those who attended a skill demonstration (control group; n = 70) and those who watched the video series (experimental group; n = 59). The control group attended a pre-existing 90-minute nurse educator-led skill demonstration. Participants then completed a 30-item multiple choice quiz to test their knowledge. Each group then received the alternate education method and completed a 23-item follow-up survey to determine their preferred method. Seven 2- to 6-minute videos (30 minutes total) were created on surgical attire, scrubbing, gowning and gloving, and maintaining sterility. The experimental group (n = 51) scored higher on the quiz compared with the control group (n = 63) (88% ± 1% versus 72% ± 1%; p < 0.0001). Students preferred the videos when it came to convenience, accessibility, efficiency, and review, and preferred the skill demonstration when it came to knowledge retention, preparedness, and ease of completion. Video education is superior to traditional skill demonstration in providing medical students with knowledge of SST. Students identified strengths to each method of teaching. Video education can augment medical students' knowledge prior to their operating room

  16. Effect of video-game experience and position of flight stick controller on simulated-flight performance.

    Science.gov (United States)

    Cho, Bo-Keun; Aghazadeh, Fereydoun; Al-Qaisi, Saif

    2012-01-01

    The purpose of this study was to determine the effects of video-game experience and flight-stick position on flying performance. The study divided participants into 2 groups, center- and side-stick groups, which were further divided into high and low video-game experience subgroups. The experiment consisted of 7 sessions of simulated flying, and in the last session the flight stick controller was switched to the other position. Flight performance was measured in terms of the deviation of heading, altitude, and airspeed from their respective requirements. Participants with high experience in video games performed significantly better than those with low experience. After the controller position was switched, one group showed a slight increase in performance scores (0.78%), whereas performance scores decreased (4.8%) after switching from a center- to a side-stick controller.

  17. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  18. Stationary Stereo-Video Camera Stations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Accurate and precise stock assessments are predicated on accurate and precise estimates of life history parameters, abundance, and catch across the range of the...

  19. A randomized controlled study to evaluate the role of video-based coaching in training laparoscopic skills.

    Science.gov (United States)

    Singh, Pritam; Aggarwal, Rajesh; Tahir, Muaaz; Pucher, Philip H; Darzi, Ara

    2015-05-01

    This study evaluates whether video-based coaching can enhance laparoscopic surgical skills performance. Many professions utilize coaching to improve performance. The sports industry employs video analysis to maximize improvement from every performance. Laparoscopic novices were baseline tested and then trained on a validated virtual reality (VR) laparoscopic cholecystectomy (LC) curriculum. After competence, subjects were randomized on a 1:1 ratio and each performed 5 VRLCs. After each LC, intervention group subjects received video-based coaching by a surgeon, utilizing an adaptation of the GROW (Goals, Reality, Options, Wrap-up) coaching model. Control subjects viewed online surgical lectures. All subjects then performed 2 porcine LCs. Performance was assessed by blinded video review using validated global rating scales. Twenty subjects were recruited. No significant differences were observed between groups in baseline performance and in VRLC1. For each subsequent repetition, intervention subjects significantly outperformed controls on all global rating scales. Interventions outperformed controls in porcine LC1 [Global Operative Assessment of Laparoscopic Skills: (20.5 vs 15.5; P = 0.011), Objective Structured Assessment of Technical Skills: (21.5vs 14.5; P = 0.001), and Operative Performance Rating System: (26 vs 19.5; P = 0.001)] and porcine LC2 [Global Operative Assessment of Laparoscopic Skills: (28 vs 17.5; P = 0.005), Objective Structured Assessment of Technical Skills: (30 vs 16.5; P < 0.001), and Operative Performance Rating System: (36 vs 21; P = 0.004)]. Intervention subjects took significantly longer than controls in porcine LC1 (2920 vs 2004 seconds; P = 0.009) and LC2 (2297 vs 1683; P = 0.003). Despite equivalent exposure to practical laparoscopic skills training, video-based coaching enhanced the quality of laparoscopic surgical performance on both VR and porcine LCs, although at the expense of increased time. Video-based coaching is a feasible

  20. Assessment of Active Video Gaming Using Adapted Controllers by Individuals With Physical Disabilities: A Protocol.

    Science.gov (United States)

    Malone, Laurie A; Padalabalanarayanan, Sangeetha; McCroskey, Justin; Thirumalai, Mohanraj

    2017-06-16

    Individuals with disabilities are typically more sedentary and less fit compared to their peers without disabilities. Furthermore, engaging in physical activity can be extremely challenging due to physical impairments associated with disability and fewer opportunities to participate. One option for increasing physical activity is playing active video games (AVG), a category of video games that requires much more body movement for successful play than conventional push-button or joystick actions. However, many current AVGs are inaccessible or offer limited play options for individuals who are unable to stand, have balance issues, poor motor control, or cannot use their lower body to perform game activities. Making AVGs accessible to people with disabilities offers an innovative approach to overcoming various barriers to participation in physical activity. Our aim was to compare the effect of off-the-shelf and adapted game controllers on quality of game play, enjoyment, and energy expenditure during active video gaming in persons with physical disabilities, specifically those with mobility impairments (ie, unable to stand, balance issues, poor motor control, unable to use lower extremity for gameplay). The gaming controllers to be evaluated include off-the-shelf and adapted versions of the Wii Fit balance board and gaming mat. Participants (10-60 years old) came to the laboratory a total of three times. During the first visit, participants completed a functional assessment and became familiar with the equipment and games to be played. For the functional assessment, participants performed 18 functional movement tasks from the International Classification of Functioning, Disability, and Health. They also answered a series of questions from the Patient Reported Outcomes Measurement Information System and Quality of Life in Neurological Conditions measurement tools, to provide a personal perspective regarding their own functional ability. For Visit 2, metabolic data were

  1. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras

  2. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distance between the individual photomultiplier tubes of the multiple radioisotope camera on the one hand and between the tube configuration and the scintillator plate on the other. For this purpose the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG) [de

  3. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
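
    A simplified time-lapse capture loop in the spirit of the system described above, assuming the legacy picamera library on Raspberry Pi OS; the capture interval, output path, and the placeholder gps_timestamp() function stand in for the observatory's actual Python scripts and GPS handling.

```python
import time
from picamera import PiCamera   # legacy picamera library on Raspberry Pi OS (assumed)

def gps_timestamp():
    # Placeholder: the real system reads time from a GPS module so that image
    # timestamps stay accurate without a network connection.
    return time.strftime("%Y%m%d_%H%M%S", time.gmtime())

def timelapse(interval_s=300, out_dir="/home/pi/images"):
    """Capture a still image every interval_s seconds and save it to out_dir."""
    camera = PiCamera(resolution=(2592, 1944))   # 5-megapixel sensor
    camera.start_preview()
    time.sleep(2)                                # let exposure settle
    try:
        while True:
            filename = f"{out_dir}/lavalake_{gps_timestamp()}.jpg"
            camera.capture(filename)
            print("saved", filename)
            time.sleep(interval_s)
    finally:
        camera.close()

if __name__ == "__main__":
    timelapse()
```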

  4. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
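
    A tiny PyTorch stand-in for the multi-frame deblurring idea: a small CNN takes a stack of five neighboring RGB frames and predicts a residual correction for the center frame. The layer sizes and residual formulation are illustrative; the network in the paper is substantially larger and trained on the authors' real-video dataset.

```python
import torch
import torch.nn as nn

class TinyVideoDeblurNet(nn.Module):
    """Toy multi-frame deblurring network (illustrative, not the paper's model)."""
    def __init__(self, num_frames=5):
        super().__init__()
        c_in = 3 * num_frames                       # stacked RGB frames as channels
        self.net = nn.Sequential(
            nn.Conv2d(c_in, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame_stack):
        # frame_stack: (batch, num_frames * 3, H, W); predict a residual on top of
        # the blurry center frame, which often stabilizes training.
        center = frame_stack[:, 6:9]                # RGB channels of the middle frame
        return center + self.net(frame_stack)

if __name__ == "__main__":
    model = TinyVideoDeblurNet()
    blurry_stack = torch.rand(1, 15, 128, 128)
    print(model(blurry_stack).shape)                # torch.Size([1, 3, 128, 128])
```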

  5. A Taxonomy of Asynchronous Instructional Video Styles

    Science.gov (United States)

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  6. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    less vertical motion. The exceptions are videos from the classes of biking (mainly due to the camera tracking fast bikers), jumping on a trampoline ...tracking the bikers; the jumping videos, featuring people on trampolines, the swing videos, which are usually recorded in profile view, and the walking

  7. Development of Automated Tracking System with Active Cameras for Figure Skating

    Science.gov (United States)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
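
    A schematic version of the tracking loop described above, assuming a grayscale frame and a hypothetical PTZCamera interface; the ice-intensity threshold, the crude "non-ice pixels inside the rink bounding box" skater test, and the proportional gains are illustrative, not the authors' region-growing implementation.

```python
import numpy as np

class PTZCamera:
    """Hypothetical stand-in for a real pan-tilt-zoom camera driver."""
    def pan(self, deg): print(f"pan {deg:+.2f}")
    def tilt(self, deg): print(f"tilt {deg:+.2f}")

def track_step(gray_frame, camera, gain=0.05, ice_thresh=200):
    """Nudge the camera so the skater stays near the image center."""
    h, w = gray_frame.shape
    ice = gray_frame > ice_thresh                   # uniform, high-intensity ice surface
    ys, xs = np.nonzero(ice)
    if len(xs) == 0:
        return                                      # rink not visible in this frame
    # Restrict the search to the rink bounding box and look for darker, non-ice
    # pixels (skater and costume) inside it.
    box = gray_frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    sy, sx = np.nonzero(box <= ice_thresh)
    if len(sx) == 0:
        return
    cy = ys.min() + sy.mean()                       # skater centroid in image coords
    cx = xs.min() + sx.mean()
    # Proportional control toward keeping the skater centered.
    camera.pan(gain * (cx - w / 2))
    camera.tilt(gain * (cy - h / 2))

if __name__ == "__main__":
    frame = np.full((480, 640), 230, dtype=np.uint8)   # bright "ice"
    frame[100:150, 400:430] = 60                       # dark "skater" blob
    track_step(frame, PTZCamera())
```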

  8. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Science.gov (United States)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable, and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  9. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable, and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
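
    A toy chroma-key extraction in the spirit of the system above, assuming a NumPy RGB frame and a green backdrop; the key color and distance tolerance are illustrative values, not the paper's method.

```python
import numpy as np

def chroma_key_mask(frame_rgb, key_color=(0, 255, 0), tolerance=80):
    """Return a boolean mask that is True for foreground (non-key) pixels."""
    diff = frame_rgb.astype(np.int32) - np.array(key_color, dtype=np.int32)
    distance = np.sqrt((diff ** 2).sum(axis=-1))
    return distance > tolerance

def composite(foreground_rgb, background_rgb, key_color=(0, 255, 0)):
    """Insert the extracted foreground object into a new (virtual) background."""
    mask = chroma_key_mask(foreground_rgb, key_color)[..., None]
    return np.where(mask, foreground_rgb, background_rgb)

if __name__ == "__main__":
    fg = np.zeros((4, 4, 3), dtype=np.uint8); fg[:] = (0, 255, 0)   # green backdrop
    fg[1:3, 1:3] = (200, 50, 50)                                    # "object" pixels
    bg = np.full((4, 4, 3), 30, dtype=np.uint8)                     # virtual background
    print(composite(fg, bg)[:, :, 0])
```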

  10. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still camera-based facial image acquisition systems in surveillance applications produce low quality face images. This is mainly due to the distance between the camera and subjects of interest. Furthermore, people in such videos usually move around, change their head poses, and facial ... This paper presents a pan-tilt-zoom (PTZ) camera-based real-time high-quality face image acquisition system, which utilizes pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log the high quality faces from the active camera in real-time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human annotated data.
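
    A face-quality logger of the kind outlined above can be approximated by scoring each detected face on sharpness and size and keeping only the best-scoring crops. The Haar-cascade detector, the weighting, and the file names below are illustrative assumptions rather than the system's actual quality assessment module.

```python
# Sketch of logging the best-quality face crops from a video stream.
# Quality here is a simple mix of sharpness (Laplacian variance) and size;
# the real system uses a dedicated face quality assessment module.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def quality(face_img):
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    size = gray.shape[0] * gray.shape[1]
    return 0.7 * sharpness + 0.3 * (size / 1000.0)   # assumed weighting

best = []                                            # (score, crop) pairs
cap = cv2.VideoCapture("surveillance.avi")           # hypothetical source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        crop = frame[y:y + h, x:x + w]
        best.append((quality(crop), crop))

best.sort(key=lambda t: t[0], reverse=True)
for i, (score, crop) in enumerate(best[:10]):        # log the 10 best faces
    cv2.imwrite(f"face_{i}_{score:.0f}.png", crop)
```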

  11. Correlation between two-dimensional video analysis and subjective assessment in evaluating knee control among elite female team handball players

    DEFF Research Database (Denmark)

    Stensrud, Silje; Myklebust, Grethe; Kristianslund, Eirik

    2011-01-01

    The present study investigated the correlation between a two-dimensional (2D) video analysis and subjective assessment performed by one physiotherapist in evaluating knee control. We also tested the correlation between three simple clinical tests using both methods. A cohort of 186 female elite team handball...

  12. Teaching Parents about Responsive Feeding through a Vicarious Learning Video: A Pilot Randomized Controlled Trial

    Science.gov (United States)

    Ledoux, Tracey; Robinson, Jessica; Baranowski, Tom; O'Connor, Daniel P.

    2018-01-01

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using community-based participatory research methods.…

  13. Teaching parents about responsive feeding through a vicarious learning video: A pilot randomized controlled trial

    Science.gov (United States)

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using com...

  14. The effect of briefing videos in medical simulation-based education : a randomised controlled trial

    NARCIS (Netherlands)

    van Tetering, A.A.C.; Truijens, S.E.M.; van der Hout - van der Jagt, M.B.; Wijsman, J.L.P; Oei, S.G.

    2014-01-01

    The aim of this study is to compare the effects of an affective briefing video with a textual briefing on cognitive appraisal (threat or challenge response). It is hypothesized that briefing videos will cause a threat response, which is associated with an increase in cortisol and memory consolidation.

  15. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user’s quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.
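
    The trade-off the algorithm exploits can be illustrated with a toy energy model: fetching more chunks per radio burst amortises the radio's promotion and tail overhead, while the per-burst CPU decode cost grows with the amount of buffered work. All constants, and the model itself, are illustrative assumptions rather than the paper's measured energy profile.

```python
# Toy model: choose how many chunks to fetch per radio burst so that the
# combined radio + CPU energy per chunk is minimised. All constants are
# assumed for illustration; the paper derives them from measured profiles.

RADIO_OVERHEAD_J = 5.0      # promotion + tail energy paid once per burst
RADIO_PER_CHUNK_J = 0.8     # energy to transfer one chunk
CPU_PER_CHUNK_J = 0.5       # energy to decode one chunk
CPU_BURST_PENALTY_J = 0.05  # extra per-chunk cost at higher burst load

def energy_per_chunk(k):
    radio = RADIO_OVERHEAD_J / k + RADIO_PER_CHUNK_J
    cpu = CPU_PER_CHUNK_J + CPU_BURST_PENALTY_J * (k - 1)
    return radio + cpu

best_k = min(range(1, 33), key=energy_per_chunk)
print(best_k, round(energy_per_chunk(best_k), 3))
```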

  16. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  17. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at block temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding combining the advantages of wavelet coding (performance, scalability) with acceptable reduced memory requirements, no additional CPU complexity, and avoiding jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.
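
    The temporal part of such a 3D transform can be illustrated with a one-level Haar transform applied along the time axis of a small group of frames; the plain Haar filter and the fixed group size are simplifying assumptions, not the scan-based scheme proposed in the paper.

```python
# One-level temporal Haar transform over a group of frames (illustrative
# only; the paper's contribution is a scan-based scheme that avoids
# buffering large 3D blocks).
import numpy as np

def temporal_haar(frames):
    """frames: array of shape (T, H, W) with even T."""
    even = frames[0::2].astype(np.float64)
    odd = frames[1::2].astype(np.float64)
    low = (even + odd) / np.sqrt(2.0)    # temporal approximation band
    high = (even - odd) / np.sqrt(2.0)   # temporal detail band
    return low, high

group = np.random.randint(0, 256, size=(8, 144, 176))   # 8 QCIF-sized frames
low, high = temporal_haar(group)
print(low.shape, high.shape)   # (4, 144, 176) each
```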

  18. Reaction time, impulsivity, and attention in hyperactive children and controls: a video game technique.

    Science.gov (United States)

    Mitchell, W G; Chavez, J M; Baker, S A; Guzman, B L; Azen, S P

    1990-07-01

    Maturation of sustained attention was studied in a group of 52 hyperactive elementary school children and 152 controls using a microcomputer-based test formatted to resemble a video game. In nonhyperactive children, both simple and complex reaction time decreased with age, as did variability of response time. Omission errors were extremely infrequent on simple reaction time and decreased with age on the more complex tasks. Commission errors had an inconsistent relationship with age. Hyperactive children were slower, more variable, and made more errors on all segments of the game than did controls. Both motor speed and calculated mental speed were slower in hyperactive children, with greater discrepancy for responses directed to the nondominant hand, suggesting that a selective right hemisphere deficit may be present in hyperactives. A summary score (number of individual game scores above the 95th percentile) of 4 or more detected 60% of hyperactive subjects with a false positive rate of 5%. Agreement with the Matching Familiar Figures Test was 75% in the hyperactive group.

  19. Randomized controlled trial of a video decision support tool for cardiopulmonary resuscitation decision making in advanced cancer.

    Science.gov (United States)

    Volandes, Angelo E; Paasche-Orlow, Michael K; Mitchell, Susan L; El-Jawahri, Areej; Davis, Aretha Delight; Barry, Michael J; Hartshorn, Kevan L; Jackson, Vicki Ann; Gillick, Muriel R; Walker-Corkery, Elizabeth S; Chang, Yuchiao; López, Lenny; Kemeny, Margaret; Bulone, Linda; Mann, Eileen; Misra, Sumi; Peachey, Matt; Abbo, Elmer D; Eichler, April F; Epstein, Andrew S; Noy, Ariela; Levin, Tomer T; Temel, Jennifer S

    2013-01-20

    Decision making regarding cardiopulmonary resuscitation (CPR) is challenging. This study examined the effect of a video decision support tool on CPR preferences among patients with advanced cancer. We performed a randomized controlled trial of 150 patients with advanced cancer from four oncology centers. Participants in the control arm (n = 80) listened to a verbal narrative describing CPR and the likelihood of successful resuscitation. Participants in the intervention arm (n = 70) listened to the identical narrative and viewed a 3-minute video depicting a patient on a ventilator and CPR being performed on a simulated patient. The primary outcome was participants' preference for or against CPR measured immediately after exposure to either modality. Secondary outcomes were participants' knowledge of CPR (score range of 0 to 4, with higher score indicating more knowledge) and comfort with video. The mean age of participants was 62 years (standard deviation, 11 years); 49% were women, 44% were African American or Latino, and 47% had lung or colon cancer. After the verbal narrative, in the control arm, 38 participants (48%) wanted CPR, 41 (51%) wanted no CPR, and one (1%) was uncertain. In contrast, in the intervention arm, 14 participants (20%) wanted CPR, 55 (79%) wanted no CPR, and 1 (1%) was uncertain (unadjusted odds ratio, 3.5; 95% CI, 1.7 to 7.2; P < .001). Participants with advanced cancer who viewed a video of CPR were less likely to opt for CPR than those who listened to a verbal narrative.

  20. Control quality and performance measurement of gamma cameras. S.F.P.M. report nr 28. Updating of S.F.P.H. reports Performance assessment and quality control of scintillation cameras: plane mode (1992), tomographic mode (1996) whole-body mode (1997)

    International Nuclear Information System (INIS)

    Petegnief, Yolande; Barrau, Corinne; Coulot, Jeremy; Guilhem, Marie Therese; Hapdey, Sebastien; Vrigneaud, Jean-Marc; Metayer, Yann; Picone, Magali; Ricard, Marcel; Salvat, Cecile; Bouchet, Francis; Ferrer, Ludovic; Murat, Caroline

    2012-01-01

    This report aims at providing students and professionals with a comprehensive guide related to quality control and to performance measurement on gamma cameras. It completes and updates three previous reports published by the SFPM during the 1990s related to the different acquisition modes for this modality of medical imagery: plane imagery, whole-body scanning, and tomography. The authors present the operation principle of scintillation cameras, the characteristics of a scintillation camera, analytic and algebraic algorithms of tomographic reconstruction, and the various software corrections applied in mono-photonic imagery (corrections of the attenuation effect, of the scattering effect, of the collimator response effect, and of the partial volume effect). In the next part, they present the various characteristics, parameters and issues related to performance measurement for the three addressed modes (plane, whole body, tomographic). The last part presents various aspects of the organisation of quality control and of performance follow-up: regulatory context, reference documents, internal quality control program

  1. The effectiveness of video interaction guidance in parents of premature infants: A multicenter randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Tooten Anneke

    2012-06-01

    Full Text Available Abstract Background Studies have consistently found a high incidence of neonatal medical problems, premature births and low birth weights in abused and neglected children. One of the explanations proposed for the relation between neonatal problems and adverse parenting is a possible delay or disturbance in the bonding process between the parent and infant. This hypothesis suggests that due to neonatal problems, the development of an affectionate bond between the parent and the infant is impeded. The disruption of an optimal parent-infant bond, in turn, may predispose to distorted parent-infant interactions and thus facilitate abusive or neglectful behaviours. Video Interaction Guidance (VIG) is expected to promote the bond between parents and newborns and is expected to diminish non-optimal parenting behaviour. Methods/design This study is a multi-center randomised controlled trial to evaluate the effectiveness of Video Interaction Guidance in parents of premature infants. In this study 210 newborn infants with their parents will be included: n = 70 healthy term infants (>37 weeks GA), n = 70 moderate term infants (32–37 weeks GA), who are recruited from maternity wards of 6 general hospitals, and n = 70 extremely preterm infants or very low birth weight infants. The participating families are allocated to a reference group (i.e. full term infants and their parents, receiving care as usual), a control group (i.e. premature infants and their parents, receiving care as usual) and an intervention group (i.e. premature infants and their parents, receiving VIG). The data will be collected during the first six months after birth using observations of parent-infant interactions, questionnaires and semi-structured interviews. Primary outcomes are the quality of parental bonding and parent-infant interactive behaviour. Parental secondary outcomes are (posttraumatic) stress symptoms, depression, anxiety and feelings of anger and hostility. Infant secondary outcomes are behavioral aspects such as crying

  2. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution.
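
    A self-calibration of this kind is commonly implemented with the standard OpenCV chessboard pipeline, sketched below. The board geometry, image file pattern, and the use of the plain pinhole-plus-distortion model (rather than the authors' exact procedure, or a fisheye model that very wide lenses may require) are assumptions for illustration.

```python
# Minimal OpenCV chessboard calibration sketch for a wide-angle action
# camera. Board geometry and file pattern are assumptions; very wide lenses
# may need cv2.fisheye instead of the plain distortion model used here.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_*.jpg"):             # hypothetical image set
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error:", rms)

undistorted = cv2.undistort(cv2.imread("calib_0.jpg"), K, dist)
cv2.imwrite("undistorted_0.jpg", undistorted)
```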

  3. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution.

  4. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.
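
    One of the simplest appearance cues used for re-identification across non-overlapping cameras is a colour-histogram comparison, sketched below as an illustrative baseline; the systems surveyed in the review combine far richer features, learned descriptors, and camera-link models.

```python
# Baseline appearance matching for cross-camera re-identification:
# compare HSV colour histograms of person crops. Purely illustrative;
# modern re-id uses learned descriptors and camera-link models.
import cv2

def hsv_histogram(person_crop):
    hsv = cv2.cvtColor(person_crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def similarity(crop_a, crop_b):
    return cv2.compareHist(hsv_histogram(crop_a), hsv_histogram(crop_b),
                           cv2.HISTCMP_CORREL)   # 1.0 = identical histograms

# Usage (hypothetical crops saved from two cameras):
# score = similarity(cv2.imread("cam1_person.png"), cv2.imread("cam2_person.png"))
```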

  5. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  6. Does playing a sports active video game improve object control skills of children with autism spectrum disorder?

    OpenAIRE

    Edwards, Jacqueline; Jeffrey, Sarah; May, Tamara; Rinehart, Nicole J.; Barnett, Lisa M.

    2017-01-01

    Background: Active video games (AVGs) encourage whole body movements to interact or control the gaming system, allowing the opportunity for skill development. Children with autism spectrum disorder (ASD) show decreased fundamental movement skills in comparison with their typically developing (TD) peers and might benefit from this approach. This pilot study investigates whether playing sports AVGs can increase the actual and perceived object control (OC) skills of 11 children with ASD aged 6–1...

  7. The Relationship Between Engagement and Neurophysiological Measures of Attention in Motion-Controlled Video Games: A Randomized Controlled Trial.

    Science.gov (United States)

    Leiker, Amber M; Miller, Matthew; Brewer, Lauren; Nelson, Monica; Siow, Maria; Lohse, Keith

    2016-04-21

    Video games and virtual environments continue to be the subject of research in health sciences for their capacity to augment practice through user engagement. Creating game mechanics that increase user engagement may have indirect benefits on learning (ie, engaged learners are likely to practice more) and may also have direct benefits on learning (ie, for a fixed amount of practice, engaged learners show superior retention of information or skills). To manipulate engagement through the aesthetic features of a motion-controlled video game and measure engagement's influence on learning. A group of 40 right-handed participants played the game under two different conditions (game condition or sterile condition). The mechanics of the game and the amount of practice were constant. During practice, event-related potentials (ERPs) to task-irrelevant probe tones were recorded during practice as an index of participants' attentional reserve. Participants returned for retention and transfer testing one week later. Although both groups improved in the task, there was no difference in the amount of learning between the game and sterile groups, countering previous research. A new finding was a statistically significant relationship between self-reported engagement and the amplitude of the early-P3a (eP3a) component of the ERP waveform, such that participants who reported higher levels of engagement showed a smaller eP3a (beta=-.08, P=.02). This finding provides physiological data showing that engagement elicits increased information processing (reducing attentional reserve), which yields new insight into engagement and its underlying neurophysiological properties. Future studies may objectively index engagement by quantifying ERPs (specifically the eP3a) to task-irrelevant probes.

  8. Short video interventions to reduce mental health stigma: a multi-centre randomised controlled trial in nursing high schools.

    Science.gov (United States)

    Winkler, Petr; Janoušková, Miroslava; Kožený, Jiří; Pasz, Jiří; Mladá, Karolína; Weissová, Aneta; Tušková, Eva; Evans-Lacko, Sara

    2017-12-01

    We aimed to assess whether short video interventions could reduce stigma among nursing students. A multi-centre, randomised controlled trial was conducted. Participating schools were randomly selected and randomly assigned to receive: (1) an informational leaflet, (2) a short video intervention or (3) a seminar involving direct contact with a service user. The Community Attitudes towards Mental Illness (CAMI) and Reported and Intended Behaviour Scale (RIBS) were selected as primary outcome measures. SPANOVA models were built and Cohen's d calculated to assess the overall effects in each of the trial arms. Compared to the baseline, effect sizes immediately after the intervention were small in the flyer arm (CAMI: d = 0.25; RIBS: d = 0.07), medium in the seminar arm (CAMI: d = 0.61; RIBS: d = 0.58), and medium in the video arm (CAMI: d = 0.49 RIBS: d = 0.26; n = 237). Effect sizes at the follow-up were vanishing in the flyer arm (CAMI: d = 0.05; RIBS: d = 0.04), medium in the seminar arm (CAMI: d = 0.43; RIBS: d = 0.26; n = 254), and small in the video arm (CAMI: d = 0.22 RIBS: d = 0.21; n = 237). Seminar had the strongest and relatively stable effect on students' attitudes and intended behaviour, but the effect of short video interventions was also considerable and stable over time. Since short effective video interventions are relatively cheap, conveniently accessible and easy to disseminate globally, we recommend them for further research and development.

  9. The effect of student self-video of performance on clinical skill competency: a randomised controlled trial.

    Science.gov (United States)

    Maloney, Stephen; Storr, Michael; Morgan, Prue; Ilic, Dragan

    2013-03-01

    Emerging technologies and student information technology literacy are enabling new methods of teaching and learning for clinical skill performance. Facilitating experiential practice and reflection on performance through student self-video, and exposure to peer benchmarks, may promote greater levels of skill competency. This study examines the impact of student self-video on the attainment of clinical skills. A total of 60 Physiotherapy students (100%) consented to participate in the randomised controlled trial. One group (50%) was taught a complex clinical skill with regular practical tutoring, whilst the other group (50%) supplemented the tutoring with a self-video task aimed at promoting reflection on performance. Student skill performance was measured in an objective structured clinical examination (OSCE). Students also completed an anonymous questionnaire, which explored their perception of their learning experiences. Students received significantly higher scores in the OSCE when the examined clinical skill had been supplemented with a self-video of performance task (P = 0.048). Descriptive analysis of the questionnaires relating to student perceptions on the teaching methods identified that the self-video of performance task utilised contributed to improvement in their clinical performance and their confidence for future clinical practice. Students identified a number of aspects of the submission process that contributed to this perception of educational value. The novel results of this study demonstrate that greater clinical skill competency is achieved when traditional tutoring methods are supplemented with student self-video of performance tasks. Additional benefits included the ability of staff and students to monitor longitudinal performance, and an increase in feedback opportunities.

  10. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Full Text Available Currently most ophthalmic operating rooms are equipped with an analog video recording system [analog Charge Coupled Device camera for video grabbing and a Video Cassette Recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software and formats for archiving, are discussed.

  11. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video were collected between...

  12. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a new fascinating technology that removes real-world content from live video streams. This sensational live video manipulation actually removes real objects and generates a coherent video stream in real-time. Viewers cannot detect modified content. Existing approaches are restricted to moving objects and static or almost static cameras and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  13. 75 FR 68379 - In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation

    Science.gov (United States)

    2010-11-05

    ...Notice is hereby given that a complaint was filed with the U.S. International Trade Commission on October 1, 2010, under section 337 of the Tariff Act of 1930, as amended, 19 U.S.C. 1337, on behalf of Motiva, LLC. of Dublin, Ohio. Motiva filed letters supplementing the Complaint on October 18 and 22, 2010. The complaint alleges violations of section 337 based upon the importation into the United States, the sale for importation, and the sale within the United States after importation of certain video game systems and controllers by reason of infringement of certain claims of U.S. Patent No. 7,292,151 (``the '151 patent'') and U.S. Patent No. 7,492,268 (``the '268 patent''). The complaint further alleges that an industry in the United States exists or is in the process of being established as required by subsection (a)(2) of section 337. The complainant requests that the Commission institute an investigation and, after the investigation, issue an exclusion order and a cease and desist order.

  14. SMART VIDEO SURVEILLANCE SYSTEM FOR VEHICLE DETECTION AND TRAFFIC FLOW CONTROL

    Directory of Open Access Journals (Sweden)

    A. A. SHAFIE

    2011-08-01

    Full Text Available Traffic signal lights can be optimized using vehicle flow statistics obtained by Smart Video Surveillance Software (SVSS). This research focuses on an efficient traffic control system that detects and counts vehicle numbers at various times and locations. At present, one of the biggest problems in the main cities of any country is the traffic jam during office hours and office break hours. Sometimes it can be seen that the traffic signal green light is still ON even though there is no vehicle coming. Similarly, it is also observed that long queues of vehicles are waiting even though the road ahead is empty, because the traffic signal light is selected without proper investigation of vehicle flow. This can be handled by adjusting the vehicle passing time, implemented by our developed SVSS. A number of experimental results on vehicle flows are discussed graphically in this research in order to test the feasibility of the developed system. Finally, an adaptive background model is proposed in SVSS in order to successfully detect target objects such as motor bikes, cars, buses, etc.
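
    The adaptive background model mentioned above is commonly realised with a mixture-of-Gaussians background subtractor; the sketch below counts sufficiently large foreground blobs per frame as a rough vehicle-flow proxy. The subtractor choice, area threshold, and video source are assumptions, not the SVSS implementation.

```python
# Background-subtraction vehicle detection sketch (illustrative stand-in
# for the adaptive background model in SVSS). Thresholds and the video
# source are assumed values.
import cv2

cap = cv2.VideoCapture("intersection.avi")          # hypothetical video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
MIN_VEHICLE_AREA = 1500                             # pixels, tune per camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [c for c in contours if cv2.contourArea(c) > MIN_VEHICLE_AREA]
    print("vehicles in frame:", len(vehicles))
```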

  15. Using game theory for perceptual tuned rate control algorithm in video coding

    Science.gov (United States)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual property. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
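
    With linear utilities and each macroblock's minimum acceptable bits as the disagreement point, a Nash Bargaining Solution reduces to sharing the surplus bits, weighted here by coding complexity. The closed form below is an illustrative simplification of that idea, not the paper's actual optimization.

```python
# Illustrative bit allocation via a weighted Nash Bargaining Solution with
# linear utilities: each macroblock gets its minimum bits plus a share of
# the surplus proportional to its bargaining weight (here, its coding
# complexity). This is a simplified stand-in for the paper's scheme.

def nbs_bit_allocation(frame_budget, min_bits, complexities):
    surplus = frame_budget - sum(min_bits)
    if surplus < 0:
        raise ValueError("budget below the disagreement point")
    total_w = sum(complexities)
    return [d + surplus * w / total_w
            for d, w in zip(min_bits, complexities)]

# Example: 4 macroblocks, 1200-bit frame budget.
print(nbs_bit_allocation(1200, [100, 100, 100, 100], [1.0, 3.0, 2.0, 2.0]))
```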

  16. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    Science.gov (United States)

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  17. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    OpenAIRE

    Ramos Giraldo, Paula Jimena; Guerrero Aguirre, Álvaro; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio

    2017-01-01

    Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera are presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i)...

  18. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an Infrared (IR) camera and one from an Electro-Optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture of 30 frames per second is feasible.
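
    The CPU reference pipeline that the paper accelerates can be sketched with OpenCV: SIFT keypoints are matched between consecutive frames, a homography is estimated with RANSAC, and the new frame is warped into the mosaic. The file names, thresholds, and single-pair scope are assumptions.

```python
# CPU sketch of one mosaicking step: register frame2 onto frame1 with
# SIFT + RANSAC homography. The GPU version in the paper parallelises
# these same stages; file names here are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("frame_000.png")
img2 = cv2.imread("frame_001.png")

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(d2, d1, k=2) if m.distance < 0.7 * n.distance]

src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp frame2 into frame1's coordinate frame (canvas size is a crude choice).
h, w = img1.shape[:2]
mosaic = cv2.warpPerspective(img2, H, (2 * w, h))
mosaic[0:h, 0:w] = img1
cv2.imwrite("mosaic_step.png", mosaic)
```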

  19. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other various fields of science and technology such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components. Temporal noise includes the random noise component while spatial noise includes the pattern noise component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of temporal noise of photo- and video cameras. It is based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated shot and dark temporal noises of cameras consistently in real time. The modified ASNT method is used. Estimation was performed for the following cameras: the consumer photo camera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured. Using a standard computer, frames were registered and processed in a fraction of a second to several seconds only. Also the
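
    The two-frame idea behind such measurements can be illustrated very simply: for a static scene, the variance of the difference of two frames is twice the temporal noise variance, and binning that estimate by mean signal level exposes the signal-dependent shot component. The binning scheme and the synthetic Poisson example below are simplifications, not the authors' ASNT segmentation.

```python
# Two-frame temporal noise sketch: var(frame1 - frame2) / 2 estimates the
# temporal noise variance, binned by mean signal to expose the
# signal-dependent (shot) part. A static scene is assumed; the paper's
# ASNT method handles non-uniform targets more carefully.
import numpy as np

def temporal_noise_by_signal(frame1, frame2, n_bins=16, max_dn=4095):
    f1, f2 = frame1.astype(np.float64), frame2.astype(np.float64)
    mean = (f1 + f2) / 2.0
    diff = f1 - f2
    bins = np.linspace(0, max_dn, n_bins + 1)
    results = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (mean >= lo) & (mean < hi)
        if sel.sum() > 100:                       # need enough pixels per bin
            results.append((mean[sel].mean(), diff[sel].var() / 2.0))
    return results   # list of (signal level, temporal noise variance)

# Example with synthetic Poisson-limited frames (12-bit range assumed):
truth = np.random.uniform(50, 3000, size=(480, 640))
f1 = np.random.poisson(truth)
f2 = np.random.poisson(truth)
for signal, var in temporal_noise_by_signal(f1, f2):
    print(f"signal {signal:7.1f}  noise variance {var:7.1f}")
```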

  20. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varied light-conductive capacities of the light conductors: the flow of light through each light conductor may be varied by means of a shutter. The flow of light through the individual light conductors (or through collective light conductors) can thus be balanced on the basis of their light-conductive capacities or properties, so as to preclude a distortion of the locating signals caused by those varied properties. Each light conductor has associated with it two shutters that are adjustable independently of each other, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  1. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera produces images of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computer circuits that derive, from the photomultiplier outputs, an analytical function dependent on the position of each scintillation in the crystal. The scintillation crystal is flat and spatially corresponds to the site where the radiation is produced. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, and each series group has an associated reference axis running perpendicular to it in the crystal plane. Each computer circuit is assigned to one reference axis. For each reference axis, every series of the corresponding series group has an adder in the computer circuit that produces a scintillation-dependent series signal, from which the projection of the scintillation onto that reference axis is calculated. The series signals used originate from series chosen from two neighbouring photomultiplier series of the group between which the scintillation appeared; these are termed the basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de
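
    The position computation in cameras of this kind amounts to a weighted centroid of the photomultiplier outputs along each reference axis (Anger logic); the sketch below shows that centroid for one axis and illustrates the principle only, not the patented series/adder circuitry.

```python
# Weighted-centroid (Anger-logic style) position estimate along one
# reference axis from photomultiplier signals. Illustration of the
# principle only, not the patented series/adder arrangement.

def axis_position(pm_signals, pm_positions):
    total = sum(pm_signals)
    return sum(s * x for s, x in zip(pm_signals, pm_positions)) / total

# Example: five tubes along one axis at -2..2 (arbitrary units); the
# scintillation lies between the two right-most tubes.
print(axis_position([0.1, 0.3, 1.0, 2.5, 1.8], [-2, -1, 0, 1, 2]))
```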

  2. Trading Shovels for Controllers: A Brief Exploration of the Portrayal of Archaeology in Video Games

    OpenAIRE

    Reinhard, Andrew; Meyers Emery, Kathryn

    2016-01-01

    Archaeology has been a persistent theme for video games, from the long-running Indiana Jones and Lara Croft franchises to more recent uses of archaeology in games like Destiny and World of Warcraft. In these games, archaeology is often portrayed as a search for treasure among lost worlds that leads to looting and the destruction of cultural heritage. In this article, we review the current state of archaeological video games, including mainstream and educational games. While this is not an exh...

  3. Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.

    2011-01-01

    The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.

  4. What Counts as Educational Video?: Working toward Best Practice Alignment between Video Production Approaches and Outcomes

    Science.gov (United States)

    Winslett, Greg

    2014-01-01

    The twenty years since the first digital video camera was made commercially available has seen significant increases in the use of low-cost, amateur video productions for teaching and learning. In the same period, production and consumption of professionally produced video has also increased, as has the distribution platforms to access it.…

  5. Geometric database maintenance using CCTV cameras and overlay graphics

    Science.gov (United States)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
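
    Superimposing a wireframe model on a calibrated camera view comes down to projecting the model's 3D vertices with the camera parameters and drawing the edges between them; the box model, pose, and intrinsics below are placeholders, not the testbed's actual data.

```python
# Sketch of overlaying a wireframe box on a camera image using known
# intrinsics and an assumed object pose (all numbers are placeholders).
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # intrinsics
dist = np.zeros(5)
rvec = np.array([0.1, 0.2, 0.0])          # object pose in the camera frame
tvec = np.array([0.0, 0.0, 3.0])

# Unit cube vertices (index = 4x + 2y + z) and its 12 edges.
verts = np.float32([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
edges = [(0, 1), (0, 2), (0, 4), (1, 3), (1, 5), (2, 3),
         (2, 6), (3, 7), (4, 5), (4, 6), (5, 7), (6, 7)]

pts, _ = cv2.projectPoints(verts, rvec, tvec, K, dist)
pts = pts.reshape(-1, 2).astype(int)

canvas = np.zeros((480, 640, 3), np.uint8)                # stand-in for video
for a, b in edges:
    cv2.line(canvas, tuple(pts[a]), tuple(pts[b]), (0, 255, 0), 1)
cv2.imwrite("wireframe_overlay.png", canvas)
```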

  6. A brief report on the relationship between self-control, video game addiction and academic achievement in normal and ADHD students.

    Science.gov (United States)

    Haghbin, Maryam; Shaterian, Fatemeh; Hosseinzadeh, Davood; Griffiths, Mark D

    2013-12-01

    Over the last two decades, research into video game addiction has grown considerably. The present research aimed to examine the relationship between video game addiction, self-control, and academic achievement of normal and ADHD high school students. Based on previous research it was hypothesized that (i) there would be a relationship between video game addiction, self-control and academic achievement, (ii) video game addiction, self-control and academic achievement would differ between male and female students, and (iii) the relationship between video game addiction, self-control and academic achievement would differ between normal students and ADHD students. The research population comprised first grade high school students of Khomeini-Shahr (a city in the central part of Iran). From this population, a sample group of 339 students participated in the study. The survey included the Game Addiction Scale (Lemmens, Valkenburg & Peter, 2009), the Self-Control Scale (Tangney, Baumeister & Boone, 2004) and the ADHD Diagnostic Checklist (Kessler et al., 2007). In addition to questions relating to basic demographic information, students' Grade Point Average (GPA) for two terms was used for measuring their academic achievement. These hypotheses were examined using regression analysis. Among Iranian students, the relationship between video game addiction, self-control, and academic achievement differed between male and female students. However, the relationship between video game addiction, self-control, academic achievement, and type of student was not statistically significant. Although the results cannot demonstrate a causal relationship between video game use, video game addiction, and academic achievement, they suggest that high involvement in playing video games leaves less time for engaging in academic work.

  7. An alternate way for image documentation in gamma camera processing units

    International Nuclear Information System (INIS)

    Schneider, P.

    1980-01-01

    For documentation of images and curves generated by a gamma camera processing system, a film exposure unit from a CT system was linked to the video monitor by means of a resistance bridge. The machine has a stock capacity of 100 plane films. The advantages are that no interface is needed, the complete information on the monitor is transferred to the plane film, and, compared with software-controlled data output on a printer or plotter, the device is tremendously time-saving. (orig.) [de

  8. Effects of video-based, online education on behavioral and knowledge outcomes in sunscreen use: a randomized controlled trial.

    Science.gov (United States)

    Armstrong, April W; Idriss, Nayla Z; Kim, Randie H

    2011-05-01

    To compare online video and pamphlet education at improving patient comprehension of and adherence to sunscreen use, and to assess patient satisfaction with the two educational approaches. In a randomized controlled trial, 94 participants received either online, video-based education or pamphlet-based education that described the importance and proper use of sunscreen. Sun protective knowledge and sunscreen application behaviors were assessed at baseline and 12 weeks after the group-specific intervention. Participants in both groups had similar levels of baseline sunscreen knowledge. Post-study analysis revealed significantly greater improvement in the knowledge scores of video group members compared to the pamphlet group (p=0.003). More importantly, video group participants reported greater sunscreen adherence and found their education vehicle more useful and appealing than the pamphlet group did. Video-based education thus appears to be a more effective educational tool for teaching sun protective knowledge and encouraging sunscreen use than written materials. More effective patient educational methods to encourage sun protection activities, such as regular sunscreen use, have the potential to increase awareness and foster positive, preventative health behaviors against skin cancers. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. Assessing stimulus control and promoting generalization via video modeling when teaching social responses to children with autism.

    Science.gov (United States)

    Jones, JoAnna; Lerman, Dorothea C; Lechago, Sarah

    2014-01-01

    We taught social responses to young children with autism using an adult as the recipient of the social interaction and then assessed generalization of performance to adults and peers who had not participated in the training. Although the participants' performance was similar across adults, responding was less consistent with peers, and a subsequent probe suggested that the recipient of the social behavior (adults vs. peers) controlled responding. We then evaluated the effects of having participants observe a video of a peer engaged in the targeted social behavior with another peer who provided reinforcement for the social response. Results suggested that certain irrelevant stimuli (adult vs. peer recipient) were more likely to exert stimulus control over responding than others (setting, materials) and that video viewing was an efficient way to promote generalization to peers. © Society for the Experimental Analysis of Behavior.

  10. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material even real-time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera in the social environment, everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations) but also the other way around, the participants affect the video by their varying and evolving personal and communicational motivations for recording.

  11. A positioning system for forest diseases and pests based on GIS and PTZ camera

    International Nuclear Information System (INIS)

    Wang, Z B; Zhao, F F; Wang, C B; Wang, L L

    2014-01-01

    Forest diseases and pests cause enormous economic losses and ecological damage every year in China. The key to preventing and controlling forest diseases and pests is to obtain accurate information in a timely manner. In order to improve the monitoring coverage rate and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area and is particularly effective where trees have withered. Forest disease prevention and control workers can check the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera, and its accuracy is validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate. The 2D GIS subsystem can output disaster thematic maps. The video control subsystem can change the pan/tilt/zoom of a digital camera remotely to focus on a suspected area. The disaster positioning subsystem implements the positioning algorithm. Practical application shows that the positioning system can be used by forest departments to observe forest diseases and pests.
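
    The core of a DEM-based positioning algorithm of this kind is to cast a ray from the camera along the current pan/tilt direction and march along it until it drops below the terrain surface; the flat local coordinates, regular-grid DEM, and fixed step size below are simplifying assumptions.

```python
# Sketch of locating a target by intersecting the PTZ viewing ray with a
# DEM. Flat local coordinates, a regular-grid DEM, and a fixed step size
# are simplifying assumptions.
import numpy as np

def locate(cam_xyz, pan_deg, tilt_deg, dem, cell_size, step=1.0, max_range=5000.0):
    """dem[i, j] is terrain height at x = j*cell_size, y = i*cell_size."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    direction = np.array([np.cos(tilt) * np.sin(pan),     # east
                          np.cos(tilt) * np.cos(pan),     # north
                          -np.sin(tilt)])                 # tilt down is positive
    pos = np.array(cam_xyz, dtype=float)
    for _ in range(int(max_range / step)):
        pos = pos + step * direction
        i, j = int(pos[1] / cell_size), int(pos[0] / cell_size)
        if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
            return None                                   # ray left the DEM
        if pos[2] <= dem[i, j]:
            return pos                                    # first ground hit
    return None

dem = np.zeros((200, 200)) + 100.0                        # flat terrain at 100 m
print(locate(cam_xyz=(1000.0, 1000.0, 130.0), pan_deg=45.0, tilt_deg=10.0,
             dem=dem, cell_size=10.0))
```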

  12. 77 FR 58577 - Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for...

    Science.gov (United States)

    2012-09-21

    ...Notice is hereby given that the presiding administrative law judge has issued a Final Initial Determination and Recommended Determination on Remedy and Bonding in the above-captioned investigation. The Commission is soliciting comments on public interest issues raised by the recommended relief, specifically a limited exclusion order and a cease and desist order against certain video game systems and wireless controllers and components thereof, imported by respondent Nintendo Co., Ltd., of Kyoto, Japan and Nintendo America, Inc. of Redmond, Washington (collectively, ``Nintendo'').

  13. Video- or text-based e-learning when teaching clinical procedures? A randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Buch SV

    2014-08-01

    Full Text Available Steen Vigh Buch,1 Frederik Philip Treschow,2 Jesper Brink Svendsen,3 Bjarne Skjødt Worm4 1Department of Vascular Surgery, Rigshospitalet, Copenhagen, Denmark; 2Department of Anesthesia and Intensive Care, Herlev Hospital, Copenhagen, Denmark; 3Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; 4Department of Anesthesia and Intensive Care, Bispebjerg Hospital, Copenhagen, Denmark Background and aims: This study investigated the effectiveness of two different levels of e-learning when teaching clinical skills to medical students. Materials and methods: Sixty medical students were included and randomized into two comparable groups. The groups were given either a video- or text/picture-based e-learning module and subsequently underwent both theoretical and practical examination. A follow-up test was performed 1 month later. Results: The students in the video group performed better than the illustrated text-based group in the practical examination, both in the primary test (P<0.001) and in the follow-up test (P<0.01). Regarding theoretical knowledge, no differences were found between the groups on the primary test, though the video group performed better on the follow-up test (P=0.04). Conclusion: Video-based e-learning is superior to illustrated text-based e-learning when teaching certain practical clinical skills. Keywords: e-learning, video versus text, medicine, clinical skills

  14. Cupping for treating neck pain in video display terminal (VDT) users: a randomized controlled pilot trial.

    Science.gov (United States)

    Kim, Tae-Hun; Kang, Jung Won; Kim, Kun Hyung; Lee, Min Hee; Kim, Jung Eun; Kim, Joo-Hee; Lee, Seunghoon; Shin, Mi-Suk; Jung, So-Young; Kim, Ae-Ran; Park, Hyo-Ju; Hong, Kwon Eui

    2012-01-01

    This was a randomized controlled pilot trial to evaluate the effectiveness of cupping therapy for neck pain in video display terminal (VDT) workers. Forty VDT workers with moderate to severe neck pain were recruited from May 2011 to February 2012. Participants were randomly allocated to one of two interventions: 6 sessions of wet and dry cupping, or heating pad application. The participants were offered an exercise program to perform during the participation period. A 0 to 100 numeric rating scale (NRS) for neck pain, the Measure Yourself Medical Outcome Profile 2 score (MYMOP2 score), cervical spine range of motion (C-spine ROM), the neck disability index (NDI), the EuroQol health index (EQ-5D), the short form stress response inventory (SRI-SF) and the fatigue severity scale (FSS) were assessed at several points during a 7-week period. Compared with a heating pad, cupping was more effective in improving pain (adjusted NRS difference: -1.29 [95% CI -1.61, -0.97] at 3 weeks (p=0.025) and -1.16 [-1.48, -0.84] at 7 weeks (p=0.005)), neck function (adjusted NDI difference: -0.79 [-1.11, -0.47] at 3 weeks (p=0.0039) and at 7 weeks) and quality of life (EQ-5D index of 0.91 [0.86, 0.91] with heating pad treatment versus a higher index with cupping, p=0.0054). Four participants reported mild adverse events of cupping. Two weeks of cupping therapy and an exercise program may be effective in reducing pain and improving neck function in VDT workers.

  15. Reactions to a remote-controlled video-communication robot in seniors' homes: a pilot study of feasibility and acceptance.

    Science.gov (United States)

    Seelye, Adriana M; Wild, Katherine V; Larimer, Nicole; Maxwell, Shoshana; Kearns, Peter; Kaye, Jeffrey A

    2012-12-01

    Remote telepresence provided by tele-operated robotics represents a new means for obtaining important health information, improving older adults' social and daily functioning and providing peace of mind to family members and caregivers who live remotely. In this study we tested the feasibility of use and acceptance of a remotely controlled robot with video-communication capability in independently living, cognitively intact older adults. A mobile remotely controlled robot with video-communication ability was placed in the homes of eight seniors. The attitudes and preferences of these volunteers and those of family or friends who communicated with them remotely via the device were assessed through survey instruments. Overall experiences were consistently positive, with the exception of one user who subsequently progressed to a diagnosis of mild cognitive impairment. Responses from our participants indicated that in general they appreciated the potential of this technology to enhance their physical health and well-being, social connectedness, and ability to live independently at home. Remote users, who were friends or adult children of the participants, were more likely to test the mobility features and had several suggestions for additional useful applications. Results from the present study showed that a small sample of independently living, cognitively intact older adults and their remote collaterals responded positively to a remote controlled robot with video-communication capabilities. Research is needed to further explore the feasibility and acceptance of this type of technology with a variety of patients and their care contacts.

  16. Video-based peer feedback through social networking for robotic surgery simulation: a multicenter randomized controlled trial.

    Science.gov (United States)

    Carter, Stacey C; Chiang, Alexander; Shah, Galaxy; Kwan, Lorna; Montgomery, Jeffrey S; Karam, Amer; Tarnay, Christopher; Guru, Khurshid A; Hu, Jim C

    2015-05-01

    To examine the feasibility and outcomes of video-based peer feedback through social networking to facilitate robotic surgical skill acquisition. The acquisition of surgical skills may be challenging for novel techniques and/or those with prolonged learning curves. Randomized controlled trial involving 41 resident physicians performing the Tubes (Da Vinci, Intuitive Surgical, Sunnyvale, CA) simulator exercise with versus without peer feedback of video-recorded performance through a social networking Web page. Data collected included simulator exercise score, time to completion, and comfort and satisfaction with robotic surgery simulation. There were no baseline differences between the intervention group (n = 20) and controls (n = 21). The intervention group showed improvement in mean scores from session 1 to sessions 2 and 3 (60.7 vs 75.5). Feedback subjects were more comfortable with robotic surgery than controls (90% vs 62%, P = 0.021) and expressed greater satisfaction with the learning experience (100% vs 67%, P = 0.014). Of the intervention subjects, 85% found that peer feedback was useful and 100% found it effective. Video-based peer feedback through social networking appears to be an effective paradigm for surgical education and accelerates the robotic surgery learning curve during simulation.

  17. IndigoVision IP video keeps watch over remote gas facilities in Amazon rainforest

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2010-07-15

    In Brazil, IndigoVision's complete IP video security technology is being used to remotely monitor automated gas facilities in the Amazon rainforest. Twelve compounds containing millions of dollars of process automation, telemetry, and telecom equipment are spread across many thousands of miles of forest and centrally monitored in Rio de Janeiro using Control Center, the company's Security Management software. The security surveillance project uses a hybrid IP network comprising satellite, fibre optic, and wireless links. In addition to advanced compression technology and bandwidth tuning tools, the IP video system uses Activity Controlled Framerate (ACF), which controls the frame rate of the camera video stream based on the amount of motion in a scene. In the absence of activity, the video is streamed at a minimum framerate, but the moment activity is detected the framerate jumps to the configured maximum. This significantly reduces the amount of bandwidth needed. At each remote facility, fixed analog cameras are connected to transmitter modules that convert the feed to high-quality digital video for transmission over the IP network. The system also integrates alarms with video surveillance. PIR intruder detectors are connected to the system via digital inputs on the transmitters. Advanced alarm-handling features in the Control Center software process the PIR detector alarms and alert operators to potential intrusions. This improves operator efficiency and incident response. 1 fig.
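
    The activity-controlled framerate idea is straightforward to prototype. The sketch below is a minimal Python/OpenCV illustration of the principle only, not IndigoVision's implementation; the idle/active rates, the motion threshold and the transmit_frame hook are assumptions.

```python
# Illustrative sketch of activity-controlled frame rate: pace the transmitted
# stream at a low rate when the scene is static and jump to a high rate when
# frame differencing detects motion.  All parameters are placeholders.
import time
import cv2

MIN_FPS, MAX_FPS = 1.0, 25.0          # assumed idle/active frame rates
MOTION_THRESHOLD = 0.01               # fraction of changed pixels that counts as activity

cap = cv2.VideoCapture(0)             # any local camera stands in for the remote feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
    activity = changed / diff.size
    fps = MAX_FPS if activity > MOTION_THRESHOLD else MIN_FPS
    # transmit_frame(frame)           # hypothetical uplink over the satellite/wireless link
    prev_gray = gray
    time.sleep(1.0 / fps)             # pace the outgoing stream according to detected activity
```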

  18. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as the provision of high-resolution video images with high-ratio on- and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera, give examples of how it has been used in various power stations, and indicate potential future developments. (author)

  19. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    Yates, G.J.

    1980-06-01

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  20. Quality Variation Control for Three-Dimensional Wavelet-Based Video Coders

    Directory of Open Access Journals (Sweden)

    Vidhya Seran

    2007-02-01

    Full Text Available The fluctuation of quality over time is a problem that exists in motion-compensated temporal filtering (MCTF)-based video coding. The goal of this paper is to design a solution for overcoming the distortion fluctuation challenges faced by wavelet-based video coders. We propose a new technique for determining the number of bits to be allocated to each temporal subband in order to minimize the fluctuation in the quality of the reconstructed video. Also, the wavelet filter properties are explored to design suitable scaling coefficients with the objective of smoothing the temporal PSNR. The biorthogonal 5/3 wavelet filter is considered in this paper and experimental results are presented for 2D+t and t+2D MCTF wavelet coders.
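
    To make the rate-allocation problem concrete, the sketch below shows a generic equal-distortion bit allocation across temporal subbands under the standard exponential rate-distortion model D_i(r_i) = a_i·2^(-2·r_i). It is a textbook-style illustration under assumed subband weights a_i, not the allocation scheme derived in the paper.

```python
# Equal-distortion bit allocation across temporal subbands under the model
# D_i(r_i) = a_i * 2**(-2 * r_i).  Setting all D_i equal and enforcing a total
# rate budget R gives a closed-form allocation (a generic result, not the
# paper's specific method; the a_i values are invented).
import numpy as np

a = np.array([4.0, 2.0, 1.0, 0.5])        # assumed subband distortion weights
R_total = 8.0                              # total bits available for the group of frames

n = len(a)
# Common distortion level that exactly exhausts the rate budget:
D = np.exp(np.mean(np.log(a))) * 2.0 ** (-2.0 * R_total / n)
r = 0.5 * np.log2(a / D)                   # per-subband rates; sum(r) == R_total

print("per-subband rates:", np.round(r, 3))
print("per-subband distortion (all equal):", np.round(a * 2.0 ** (-2.0 * r), 4))
```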

  1. Quality Variation Control for Three-Dimensional Wavelet-Based Video Coders

    Directory of Open Access Journals (Sweden)

    Seran Vidhya

    2007-01-01

    Full Text Available The fluctuation of quality over time is a problem that exists in motion-compensated temporal filtering (MCTF)-based video coding. The goal of this paper is to design a solution for overcoming the distortion fluctuation challenges faced by wavelet-based video coders. We propose a new technique for determining the number of bits to be allocated to each temporal subband in order to minimize the fluctuation in the quality of the reconstructed video. Also, the wavelet filter properties are explored to design suitable scaling coefficients with the objective of smoothing the temporal PSNR. The biorthogonal 5/3 wavelet filter is considered in this paper and experimental results are presented for 2D+t and t+2D MCTF wavelet coders.

  2. Video- or text-based e-learning when teaching clinical procedures? A randomized controlled trial.

    Science.gov (United States)

    Buch, Steen Vigh; Treschow, Frederik Philip; Svendsen, Jesper Brink; Worm, Bjarne Skjødt

    2014-01-01

    This study investigated the effectiveness of two different levels of e-learning when teaching clinical skills to medical students. Sixty medical students were included and randomized into two comparable groups. The groups were given either a video- or text/picture-based e-learning module and subsequently underwent both theoretical and practical examination. A follow-up test was performed 1 month later. The students in the video group performed better than the illustrated text-based group in the practical examination in the primary test, and the video group also performed better on the follow-up test (P=0.04). Video-based e-learning is superior to illustrated text-based e-learning when teaching certain practical clinical skills.

  3. Enhancing Security and Privacy in Video Surveillance through Role-Oriented Access Control Mechanism

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim

    Pervasive usage of video surveillance systems gives substantial power to those monitoring the videos and poses a threat to the privacy of anyone observed by the system. Aside from protecting privacy from outside attackers, it is equally important to protect the privacy of individuals from the inside personnel involved in monitoring surveillance data, to minimize the chances of misuse of the system, e.g. voyeurism. In this context, several techniques to protect the privacy of individuals, called privacy enhancing techniques (PET), have been proposed in the literature; they detect and mask privacy-sensitive regions, e.g. faces, from the videos. However, very few research efforts have focused on addressing the security aspects of video surveillance data and on authorizing access to this data. Interestingly, while PETs help protect the privacy of individuals, they may also hinder the usefulness of the surveillance data.

  4. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system that is onboard the Space Shuttle comprises a camera, a video signal switching and routing unit (VSU), and the Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of the High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.

  5. The modular integrated video system (MIVS)

    International Nuclear Information System (INIS)

    Schneider, S.L.; Sonnier, C.S.

    1987-01-01

    The Modular Integrated Video System (MIVS) is being developed for the International Atomic Energy Agency (IAEA) for use in facilities where mains power is available and the separation of the Camera and Recording Control Unit is desirable. The system is being developed under the US Program for Technical Assistance to the IAEA Safeguards (POTAS). The MIVS is designed to be a user-friendly system, allowing operation with minimal effort and training. The system software, through the use of a Liquid Crystal Display (LCD) and four soft keys, leads the inspector through the setup procedures to accomplish the intended surveillance or maintenance task. Review of surveillance data is accomplished with the use of a Portable Review Station. This Review Station will aid the inspector in the review process and determine the number of missed video scenes during a surveillance period

  6. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingl...

  7. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  8. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become much more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China. In order to deal with the increasingly heavy traffic, protecting the safety of people's lives and property while facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track moving targets ahead of the vehicle. This approach is popular in the domains of intelligent vehicle safety, autonomous navigation and traffic system research. Based on the pedestrian video obtained by the vehicular camera, this paper studies pedestrian detection and tracking and the associated algorithms.
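
    As a concrete illustration of the detection step, the sketch below runs OpenCV's built-in HOG + linear-SVM people detector on a dash-cam style clip. It is a generic baseline, not the algorithm studied in the paper; the video file name and resize resolution are assumptions.

```python
# Minimal pedestrian detection on vehicular-camera video using OpenCV's
# built-in HOG people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("dashcam.mp4")      # hypothetical vehicular-camera clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))  # smaller frames keep detection near real time
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```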

  9. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live video color stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears as one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display and transfer of an HDR color video for a full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to the smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams, corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
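
    A software analogue of the capture-and-merge pipeline can be put together with OpenCV's exposure-fusion tools. The sketch below merges three bracketed exposures with the Debevec method and tone-maps the result; HDR-ARtiSt performs the equivalent steps in FPGA hardware at 60 fps, and the file names and exposure times here are assumptions.

```python
# Offline sketch of the multi-exposure HDR pipeline: recover the camera
# response, merge the exposures into a radiance map, then tone-map for display.
import cv2
import numpy as np

files = ["short.png", "mid.png", "long.png"]                      # three bracketed exposures
times = np.array([1/1000.0, 1/250.0, 1/60.0], dtype=np.float32)   # exposure times in seconds
imgs = [cv2.imread(f) for f in files]

calibrate = cv2.createCalibrateDebevec()              # recover the camera response curve
response = calibrate.process(imgs, times)

merge = cv2.createMergeDebevec()                      # combine exposures into an HDR radiance map
hdr = merge.process(imgs, times, response)

tonemap = cv2.createTonemap(gamma=2.2)                # global tone mapping for an LDR display
ldr = np.clip(tonemap.process(hdr) * 255, 0, 255).astype("uint8")
cv2.imwrite("hdr_tonemapped.png", ldr)
```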

  10. Increasing Physical Activity in Mothers Using Video Exercise Groups and Exercise Mobile Apps: Randomized Controlled Trial.

    Science.gov (United States)

    Mascarenhas, Maya Nina; Chan, June Maylin; Vittinghoff, Eric; Van Blarigan, Erin Lynn; Hecht, Frederick

    2018-05-18

    Women significantly decrease their activity levels in the transition to motherhood. Digital health technologies are low cost, scalable, and can provide an effective delivery mechanism for behavior change. This is the first study that examines the use of videoconferencing and mobile apps to create exercise groups for mothers. The aim of the study was to test the feasibility, acceptability, and effectiveness of an individually adaptive and socially supportive physical activity intervention incorporating videoconferencing and mobile apps for mothers. The Moms Online Video Exercise Study was an 8-week, 2-armed, Web-based randomized trial comparing the effectiveness of a group exercise intervention with a waitlist control. Healthy mothers with at least 1 child under the age of 12 years were recruited through Facebook and email listservs. Intervention participants joined exercise groups using videoconferencing (Google Hangouts) every morning on weekdays and exercised together in real time, guided by exercise mobile apps (eg, Nike+, Sworkit) of their choice. Waitlist control participants had access to recommended mobile apps and an invitation to join an exercise group after the 8-week study period. Main outcomes assessed included changes in self-reported moderate, vigorous, and moderate to vigorous physical activity (MVPA) minutes per week in aggregate and stratified by whether women met Centers for Disease Control and Prevention guidelines for sufficient aerobic activity at baseline. Outcomes were measured through self-assessed Web-based questionnaires at baseline and 8 weeks. The intervention was effective at increasing exercise for inactive women and proved to be feasible and acceptable to all participants. A total of 64 women were randomized, 30 to intervention and 34 to control. Women attended 2.8 sessions per week. There was a strong, but not statistically significant, trend toward increasing moderate, vigorous, and MVPA minutes for all women. As hypothesized, in

  11. An open-source, FireWire camera-based, Labview-controlled image acquisition system for automated, dynamic pupillometry and blink detection.

    Science.gov (United States)

    de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2013-12-01

    The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on the practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable video recording modes: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in Labview. An offline suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
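
    For illustration, a minimal offline pupil-size estimate for a single eye image can be obtained by thresholding the dark pupil, taking the largest contour and fitting an ellipse, as sketched below in Python/OpenCV. This mirrors the kind of processing the offline suite performs but is not the authors' code; the file name, threshold and millimetre calibration are assumptions.

```python
# Illustrative pupil-size estimation for one eye-image frame.
import cv2

frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical IR eye image
blurred = cv2.GaussianBlur(frame, (7, 7), 0)
_, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)  # pupil is the darkest region

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    pupil = max(contours, key=cv2.contourArea)           # largest dark blob ~ the pupil
    if len(pupil) >= 5:                                   # fitEllipse needs at least 5 points
        (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(pupil)
        diameter_px = max(ax1, ax2)
        mm_per_pixel = 0.03                               # assumed spatial calibration
        print(f"pupil centre=({cx:.1f},{cy:.1f}) px, "
              f"diameter ~ {diameter_px * mm_per_pixel:.2f} mm")
```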

  12. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  13. The effect of active video games by ethnicity, sex and fitness: subgroup analysis from a randomised controlled trial.

    Science.gov (United States)

    Foley, Louise; Jiang, Yannan; Ni Mhurchu, Cliona; Jull, Andrew; Prapavessis, Harry; Rodgers, Anthony; Maddison, Ralph

    2014-04-03

    The prevention and treatment of childhood obesity is a key public health challenge. However, certain groups within populations have markedly different risk profiles for obesity and related health behaviours. Well-designed subgroup analysis can identify potential differential effects of obesity interventions, which may be important for reducing health inequalities. The study aim was to evaluate the consistency of the effects of active video games across important subgroups in a randomised controlled trial (RCT). A two-arm, parallel RCT was conducted in overweight or obese children (n=322; aged 10-14 years) to determine the effect of active video games on body composition. Statistically significant overall treatment effects favouring the intervention group were found for body mass index, body mass index z-score and percentage body fat at 24 weeks. For these outcomes, pre-specified subgroup analyses were conducted among important baseline demographic (ethnicity, sex) and prognostic (cardiovascular fitness) groups. No statistically significant interaction effects were found between the treatment and subgroup terms in the main regression model (p=0.36 to 0.93), indicating a consistent treatment effect across these groups. Preliminary evidence suggests an active video games intervention had a consistent positive effect on body composition among important subgroups. This may support the use of these games as a pragmatic public health intervention to displace sedentary behaviour with physical activity in young people.
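
    The subgroup test described above amounts to adding a treatment-by-subgroup interaction term to the main outcome model and checking its significance. The sketch below shows this in Python with statsmodels on a hypothetical data frame; the file and column names are assumptions, not the trial's dataset or analysis code.

```python
# Generic sketch of a pre-specified subgroup analysis: fit the outcome model
# with a treatment-by-subgroup interaction and inspect the interaction p-values.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_outcomes.csv")          # assumed columns: bmi_change, treatment, sex

# Main effects plus the treatment x subgroup interaction term.
model = smf.ols("bmi_change ~ C(treatment) * C(sex)", data=df).fit()

# A non-significant interaction indicates a consistent treatment effect across
# the subgroup, which is the pattern reported in the trial.
interaction_terms = [name for name in model.pvalues.index if ":" in name]
print(model.pvalues[interaction_terms])
```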

  14. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  15. Evaluating the efficacy of a landscape scale feral cat control program using camera traps and occupancy models.

    Science.gov (United States)

    Comer, Sarah; Speldewinde, Peter; Tiller, Cameron; Clausen, Lucy; Pinder, Jeff; Cowen, Saul; Algar, Dave

    2018-03-28

    The impact of introduced predators is a major factor limiting survivorship and recruitment of many native Australian species. In particular, the feral cat and red fox have been implicated in range reductions and population declines of many conservation dependent species across Australia, including ground-nesting birds and small to medium-sized mammals. The impact of predation by feral cats since their introduction some 200 years ago has altered the structure of native fauna communities and led to the development of landscape-scale threat abatement via baiting programs with the feral cat bait, Eradicat. Demonstrating the effectiveness of broad-scale programs is essential for managers to fine tune delivery and timing of baiting. Efficacy of feral cat baiting at the Fortescue Marsh in the Pilbara, Western Australia was tested using camera traps and occupancy models. There was a significant decrease in probability of site occupancy in baited sites in each of the five years of this study, demonstrating both the effectiveness of aerial baiting for landscape-scale removal of feral cats, and the validity of camera trap monitoring techniques for detecting changes in feral cat occupancy during a five-year baiting program.
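
    The occupancy analysis rests on a standard single-season occupancy model, in which site occupancy (psi) and per-visit detection probability (p) are estimated jointly from repeat camera-trap detection histories. The sketch below fits such a model by maximum likelihood on invented toy data; it illustrates the model class, not the authors' analysis.

```python
# Minimal single-season occupancy model fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

# rows = sites, columns = repeat camera-trap visits (1 = cat detected); toy data
y = np.array([[0, 1, 0], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 0, 1]])

def neg_log_lik(params):
    psi, p = 1.0 / (1.0 + np.exp(-params))           # back-transform from the logit scale
    det = y.sum(axis=1)                              # detections per site
    n_visits = y.shape[1]
    lik = psi * p**det * (1 - p)**(n_visits - det)   # occupied-site contribution
    lik = lik + (det == 0) * (1 - psi)               # never-detected sites may simply be empty
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"estimated occupancy psi={psi_hat:.2f}, detection p={p_hat:.2f}")
```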

  16. EDICAM fast video diagnostic installation on the COMPASS tokamak

    International Nuclear Information System (INIS)

    Szappanos, A.; Berta, M.; Hron, M.; Panek, R.; Stoeckel, J.; Tulipan, S.; Veres, G.; Weinzettl, V.; Zoletnik, S.

    2010-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed by the Hungarian Association and was installed on the COMPASS tokamak at the Institute of Plasma Physics AS CR in Prague in February 2009. The standalone system contains a data acquisition PC and a prototype sensor module of EDICAM. An appropriate optical system has been designed and adjusted for the local requirements, and a mechanical holder keeps the camera out of the magnetic field. The fast camera contains a monochrome CMOS sensor with advanced control features and spectral sensitivity in the visible range. A special web-based control interface has been implemented using the Java Spring framework to provide the control features in a graphical user environment. The Java Native Interface (JNI) is used to reach the driver functions and to collect the data stored by direct memory access (DMA). Using a built-in real-time streaming server, one can see the live video from the camera through any web browser on the intranet. The live video is distributed in Motion JPEG format using the Real-Time Streaming Protocol (RTSP), and a Java applet has been written to show the video on the client side. The control system contains basic image processing features, and the 3D wireframe of the tokamak can be projected onto the selected frames. A MatLab interface is also presented, with advanced post-processing and analysis features to make the raw data available to high-level computing programs. In this contribution all the concepts of the EDICAM control center and the functions of the distinct software modules are described.
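
    Because the live video is served over RTSP as Motion JPEG, any standard client can display it. A minimal Python/OpenCV client is sketched below; the stream URL is a placeholder, not the actual EDICAM address.

```python
# Minimal RTSP client for a live camera stream.
import cv2

cap = cv2.VideoCapture("rtsp://edicam.example.local/live")   # hypothetical stream URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("live view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```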

  17. Radiation-resistant optical sensors and cameras; Strahlungsresistente optische Sensoren und Kameras

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, G. [Imaging and Sensing Technology, Bonn (Germany)]

    2008-02-15

    Introducing video technology, i.e. 'TV', specifically in the nuclear field was considered at an early stage. Possibilities to view spaces in nuclear facilities by means of radiation-resistant optical sensors or cameras are presented. These systems are to enable operators to monitor and control visually the processes occurring within such spaces. Camera systems are used, e.g., for remote surveillance of critical components in nuclear power plants and nuclear facilities, and thus contribute also to plant safety. A different application of optical systems resistant to radiation is in the visual inspection of, e.g., reactor pressure vessels and in tracing small parts inside a reactor. Camera systems are also employed in remote disassembly of radioactively contaminated old plants. Unfortunately, the niche market of radiation-resistant camera systems hardly gives rise to the expectation of research funds becoming available for the development of new radiation-resistant optical systems for picture taking and viewing. Current efforts are devoted mainly to improvements of image evaluation and image quality. Other items on the agendas of manufacturers are the reduction in camera size, which is limited by the size of picture tubes, and the increased use of commercial CCD cameras together with adequate shieldings or improved lenses. Consideration is also being given to the use of periphery equipment and to data transmission by LAN, WAN, or Internet links to remote locations. (orig.)

  18. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities and operates on the four head-position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by the nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization and subsequent processing of the energy signal in a multichannel analyzer, transfer of data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits consist of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  19. Self-controlled video feedback on tactical skills for soccer teams results in more active involvement of players.

    Science.gov (United States)

    van Maarseveen, Mariëtte J J; Oudejans, Raôul R D; Savelsbergh, Geert J P

    2018-02-01

    Many studies have shown that self-controlled feedback is beneficial for learning motor tasks, and that learners prefer to receive feedback after supposedly good trials. However, to date all studies conducted on self-controlled learning have used individual tasks and mainly relatively simple skills. Therefore, the aim of this study was to examine self-controlled feedback on tactical skills in small-sided soccer games. Highly talented youth soccer players were assigned to a self-control or yoked group and received video feedback on their offensive performance in 3 vs. 2 small-sided games. The results showed that the self-control group requested feedback mostly after good trials, that is, after they scored a goal. In addition, the perceived performance of the self-control group was higher on feedback than on no-feedback trials. Analyses of the conversations around the video feedback revealed that the players and coach discussed good and poor elements of performance and how to improve it. Although the coach had a major role in these conversations, the players of the self-control group spoke more and showed more initiative compared to the yoked group. The results revealed no significant beneficial effect of self-controlled feedback on performance as judged by the coach. Overall, the findings suggest that in such a complex situation as small-sided soccer games, self-controlled feedback is used both to confirm correct performance elements and to determine and correct errors, and that self-controlled learning stimulates the involvement of the learner in the learning process. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Control and Innovation on Digital Platforms : the case of Netflix and streaming of video content

    OpenAIRE

    Vigeland, Eirik

    2012-01-01

    In this thesis I investigate innovation processes on innovation platforms, and look at the role played by content release for innovation in digital distribution of home entertainment. I argue that innovation platforms rely on several aspects of innovation in order to succeed, and this thesis is concerned with one of these, namely release of digital entertainment content. I use the American video streaming service Netflix as a case and example of such an innovation platform. By using techno...

  1. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to support almost any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.
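
    The core geometry behind such an overlay is a bearing computation: the application compares the compass bearing from the phone to the point of interest with the camera heading and field of view. The sketch below illustrates this in Python with invented coordinates, heading and field of view; it is not the authors' implementation.

```python
# Decide whether a point of interest falls inside the camera's horizontal view.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

phone = (45.7580, 21.2290)            # assumed GPS fix of the device
poi = (45.7597, 21.2300)              # assumed point of interest (e.g. a building)
heading = 20.0                        # assumed compass heading of the camera, degrees
fov = 60.0                            # assumed horizontal field of view, degrees

b = bearing_deg(*phone, *poi)
offset = (b - heading + 180.0) % 360.0 - 180.0       # signed angle from the view axis
if abs(offset) <= fov / 2:
    print(f"POI at bearing {b:.1f} deg, {offset:+.1f} deg from centre: draw its label")
else:
    print("POI outside the current camera view: hide the overlay")
```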

  2. Brain training with non-action video games enhances aspects of cognition in older adults: a randomized controlled trial

    Science.gov (United States)

    Ballesteros, Soledad; Prieto, Antonio; Mayas, Julia; Toril, Pilar; Pita, Carmen; Ponce de León, Laura; Reales, José M.; Waterworth, John

    2014-01-01

    Age-related cognitive and brain declines can result in functional deterioration in many cognitive domains, dependency, and dementia. A major goal of aging research is to investigate methods that help to maintain brain health, cognition, independent living and wellbeing in older adults. This randomized controlled study investigated the effects of 20 1-h non-action video game training sessions with games selected from a commercially available package (Lumosity) on a series of age-declined cognitive functions and subjective wellbeing. Two groups of healthy older adults participated in the study, the experimental group who received the training and the control group who attended three meetings with the research team along the study. Groups were similar at baseline on demographics, vocabulary, global cognition, and depression status. All participants were assessed individually before and after the intervention, or a similar period of time, using neuropsychological tests and laboratory tasks to investigate possible transfer effects. The results showed significant improvements in the trained group, and no variation in the control group, in processing speed (choice reaction time), attention (reduction of distraction and increase of alertness), immediate and delayed visual recognition memory, as well as a trend to improve in Affection and Assertivity, two dimensions of the Wellbeing Scale. Visuospatial working memory (WM) and executive control (shifting strategy) did not improve. Overall, the current results support the idea that training healthy older adults with non-action video games will enhance some cognitive abilities but not others. PMID:25352805

  3. Brain training with non-action video games enhances aspects of cognition in older adults: a randomized controlled trial.

    Science.gov (United States)

    Ballesteros, Soledad; Prieto, Antonio; Mayas, Julia; Toril, Pilar; Pita, Carmen; Ponce de León, Laura; Reales, José M; Waterworth, John

    2014-01-01

    Age-related cognitive and brain declines can result in functional deterioration in many cognitive domains, dependency, and dementia. A major goal of aging research is to investigate methods that help to maintain brain health, cognition, independent living and wellbeing in older adults. This randomized controlled study investigated the effects of 20 1-h non-action video game training sessions with games selected from a commercially available package (Lumosity) on a series of age-declined cognitive functions and subjective wellbeing. Two groups of healthy older adults participated in the study, the experimental group who received the training and the control group who attended three meetings with the research team along the study. Groups were similar at baseline on demographics, vocabulary, global cognition, and depression status. All participants were assessed individually before and after the intervention, or a similar period of time, using neuropsychological tests and laboratory tasks to investigate possible transfer effects. The results showed significant improvements in the trained group, and no variation in the control group, in processing speed (choice reaction time), attention (reduction of distraction and increase of alertness), immediate and delayed visual recognition memory, as well as a trend to improve in Affection and Assertivity, two dimensions of the Wellbeing Scale. Visuospatial working memory (WM) and executive control (shifting strategy) did not improve. Overall, the current results support the idea that training healthy older adults with non-action video games will enhance some cognitive abilities but not others.

  4. Video Field Studies with your Cell Phone

    DEFF Research Database (Denmark)

    Buur, Jacob; Fraser, Euan

    2010-01-01

    Pod? Or with the GoPRO sports camera? Our approach has a strong focus on how to use video in design, rather than on the technical side. The goal is to engage design teams in meaningful discussions based on user empathy, rather than to produce beautiful videos. Basically it is a search for a minimalist way...

  5. Script Design for Information Film and Video.

    Science.gov (United States)

    Shelton, S. M. (Marty); And Others

    1993-01-01

    Shows how the empathy created in the audience by each of the five genres of film/video is a function of the five elements of film design: camera angle, close up, composition, continuity, and cutting. Discusses film/video script designing. Illustrates these concepts with a sample script and story board. (SR)

  6. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera...
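
    Audio-based speaker tracking and beamforming of this kind build on the delay-and-sum principle: signals from the microphones are time-aligned for a chosen direction and summed. The sketch below shows a toy delay-and-sum beamformer for a uniform linear array in Python/NumPy; the geometry, sample rate and look direction are example values, not the system's parameters.

```python
# Toy delay-and-sum beamformer for a uniform linear microphone array.
import numpy as np

fs = 16000.0                # sample rate (Hz)
c = 343.0                   # speed of sound (m/s)
spacing = 0.05              # microphone spacing (m)
n_mics = 8
angle = np.radians(30.0)    # desired look direction from broadside

# Simulated multi-channel input: shape (n_mics, n_samples); here just noise.
x = np.random.randn(n_mics, 4096)

# Per-microphone delays (in samples) that align a plane wave from `angle`.
delays = np.arange(n_mics) * spacing * np.sin(angle) / c * fs

def fractional_delay(signal, d):
    """Delay a 1-D signal by d samples using a frequency-domain phase shift."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n)
    return np.fft.irfft(np.fft.rfft(signal) * np.exp(-2j * np.pi * freqs * d), n)

aligned = np.array([fractional_delay(x[m], -delays[m]) for m in range(n_mics)])
beamformed = aligned.mean(axis=0)       # constructive sum in the look direction
print("output RMS:", np.sqrt(np.mean(beamformed ** 2)))
```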

  7. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  8. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  9. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancements in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose and orientation.
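
    A minimal version of the detect-then-track idea can be put together with OpenCV: a Haar-cascade face detector feeds a constant-velocity Kalman filter that smooths the face centre between detections. The sketch below omits the Gabor-based recognition stage, and the video source and noise covariances are assumptions.

```python
# Haar-cascade face detection combined with a constant-velocity Kalman filter.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

kf = cv2.KalmanFilter(4, 2)                       # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed noise levels
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

cap = cv2.VideoCapture("people.mp4")              # hypothetical surveillance clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]                     # track the first detected face only
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    px, py = int(prediction[0, 0]), int(prediction[1, 0])
    cv2.circle(frame, (px, py), 4, (0, 255, 0), -1)   # Kalman-predicted face centre
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```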

  10. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber implemented with FlexRIO technology has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, framerates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. This system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and process these images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using multiple computers, areaDetector plugins, and the areaDetector standard data type, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.
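
    Once a camera is exposed through areaDetector, it can be driven from any Channel Access client using the standard ADBase records. The sketch below uses pyepics with the prefix of the areaDetector simulation example; a real IOC exposes the same records under its own, installation-specific prefix.

```python
# Sketch of driving an areaDetector-based camera over Channel Access.  The
# "13SIM1:cam1:" prefix is an assumption (the areaDetector simDetector example);
# change it to match the actual installation.
from epics import caget, caput
import time

P = "13SIM1:cam1:"                       # assumed IOC prefix

caput(P + "AcquireTime", 0.01)           # 10 ms exposure
caput(P + "ImageMode", 2)                # 2 = Continuous in ADBase
caput(P + "Acquire", 1)                  # start acquisition

time.sleep(2.0)
print("frames acquired:", caget(P + "ArrayCounter_RBV"))

caput(P + "Acquire", 0)                  # stop acquisition
```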

  11. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  12. Plasticity of attentional functions in older adults after non-action video game training: a randomized controlled trial.

    Science.gov (United States)

    Mayas, Julia; Parmentier, Fabrice B R; Andrés, Pilar; Ballesteros, Soledad

    2014-01-01

    A major goal of recent research in aging has been to examine cognitive plasticity in older adults and its capacity to counteract cognitive decline. The aim of the present study was to investigate whether older adults could benefit from brain training with video games in a cross-modal oddball task designed to assess distraction and alertness. Twenty-seven healthy older adults participated in the study (15 in the experimental group, 12 in the control group). The experimental group received 20 1-hr video game training sessions using a commercially available brain-training package (Lumosity) involving problem solving, mental calculation, working memory and attention tasks. The control group did not practice this package and, instead, attended meetings with the other members of the study several times over the course of the study. Both groups were evaluated before and after the intervention using a cross-modal oddball task measuring alertness and distraction. The results showed a significant reduction of distraction and an increase of alertness in the experimental group and no variation in the control group. These results suggest neurocognitive plasticity in the old human brain as training enhanced cognitive performance on attentional functions. ClinicalTrials.gov NCT02007616.

  13. Plasticity of attentional functions in older adults after non-action video game training: a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Julia Mayas

    Full Text Available A major goal of recent research in aging has been to examine cognitive plasticity in older adults and its capacity to counteract cognitive decline. The aim of the present study was to investigate whether older adults could benefit from brain training with video games in a cross-modal oddball task designed to assess distraction and alertness. Twenty-seven healthy older adults participated in the study (15 in the experimental group, 12 in the control group). The experimental group received 20 1-hr video game training sessions using a commercially available brain-training package (Lumosity) involving problem solving, mental calculation, working memory and attention tasks. The control group did not practice this package and, instead, attended meetings with the other members of the study several times over the course of the study. Both groups were evaluated before and after the intervention using a cross-modal oddball task measuring alertness and distraction. The results showed a significant reduction of distraction and an increase of alertness in the experimental group and no variation in the control group. These results suggest neurocognitive plasticity in the old human brain as training enhanced cognitive performance on attentional functions. ClinicalTrials.gov NCT02007616.

  14. Video essay

    DEFF Research Database (Denmark)

    2015-01-01

    Camera movement has a profound influence on the way films look and the way films are experienced by spectators. In this visual essay Jakob Isak Nielsen proposes six major functions of camera movement in narrative cinema. Individual camera movements may serve more of these functions at the same time...

  15. Effects of video-feedback on the communication, clinical competence and motivational interviewing skills of practice nurses: a pre-test posttest control group study.

    Science.gov (United States)

    Noordman, Janneke; van der Weijden, Trudy; van Dulmen, Sandra

    2014-10-01

    To examine the effects of individual video-feedback on the generic communication skills, clinical competence (i.e. adherence to practice guidelines) and motivational interviewing skills of experienced practice nurses working in primary care. Continuing professional education may be necessary to refresh and reflect on the communication and motivational interviewing skills of experienced primary care practice nurses. A video-feedback method was designed to improve these skills. Pre-test/posttest control group design. Seventeen Dutch practice nurses and 325 patients participated between June 2010-June 2011. Nurse-patient consultations were videotaped at two moments (T0 and T1), with an interval of 3-6 months. The videotaped consultations were rated using two protocols: the Maastrichtse Anamnese en Advies Scorelijst met globale items (MAAS-global) and the Behaviour Change Counselling Index. Before the recordings, nurses were allocated to a control or video-feedback group. Nurses allocated to the video-feedback group received video-feedback between T0 and T1. Data were analysed using multilevel linear or logistic regression. Nurses who received video-feedback appeared to pay significantly more attention to patients' request for help, their physical examination and gave significantly more understandable information. With respect to motivational interviewing, nurses who received video-feedback appeared to pay more attention to 'agenda setting and permission seeking' during their consultations. Video-feedback is a potentially effective method to improve practice nurses' generic communication skills. Although a single video-feedback session does not seem sufficient to increase all motivational interviewing skills, significant improvement in some specific skills was found. Nurses' clinical competences were not altered after feedback due to already high standards. © 2014 John Wiley & Sons Ltd.

  16. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as a part of instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as a part of pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  17. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones.

    Science.gov (United States)

    Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P

    2017-10-13

    This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones; it aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blur caused by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
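
    Digitally, a lead-lag compensator reduces to a first-order difference equation once the continuous transfer function C(s) = K(s + z)/(s + p) is discretized, e.g. with the Tustin transform. The sketch below shows this in Python with illustrative gains and loop rate; the values are assumptions, not the ones designed in the paper.

```python
# Generic discrete lead-lag compensator C(s) = K (s + z) / (s + p), converted
# with the Tustin (bilinear) transform and run as a difference equation, the
# way such a controller would execute on an FPGA or MCU.
import numpy as np

K, z, p = 2.0, 50.0, 500.0        # assumed compensator gain, zero and pole (rad/s)
fs = 10_000.0                     # assumed control-loop rate (Hz)
T = 1.0 / fs

# Tustin: s -> (2/T) * (1 - q^-1) / (1 + q^-1)
a = 2.0 / T
b0 = K * (a + z) / (a + p)
b1 = K * (z - a) / (a + p)
a1 = (p - a) / (a + p)

def lead_lag_step(error, state):
    """One controller update: u[k] = b0*e[k] + b1*e[k-1] - a1*u[k-1]."""
    e_prev, u_prev = state
    u = b0 * error + b1 * e_prev - a1 * u_prev
    return u, (error, u)

# Drive the compensator with a unit-step error to see its transient shape.
state = (0.0, 0.0)
outputs = []
for k in range(50):
    u, state = lead_lag_step(1.0, state)
    outputs.append(u)
print(np.round(outputs[:5], 4))
```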

  18. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
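
    The per-camera tracking stage named above is the classic CamShift algorithm: a hue histogram of the target is back-projected onto each frame and the search window follows the mode. The sketch below shows this with OpenCV; the initial bounding box and video source are assumptions, and the decentralized handover layer is not modelled.

```python
# Single-camera CamShift tracking of a colour histogram from frame to frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("campus.mp4")              # hypothetical smart-camera view
ok, frame = cap.read()
x, y, w, h = 300, 200, 60, 120                    # assumed initial bounding box of the person
track_window = (x, y, w, h)

roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])   # hue histogram of the target
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rotated_box, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rotated_box).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("CamShift tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```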

  19. An integrated port camera and display system for laparoscopy.

    Science.gov (United States)

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  20. Intelligent Model for Video Survillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in the real-time environment. It helps prevent security threats with the help of visual devices that gather video information, such as CCTVs and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the security threats that exist in IP-based applications may also threaten video surveillance applications, potentially increasing cybercrime, illegal video access, mishandling of videos and so on. Hence, in this paper an intelligent model is proposed to secure video surveillance systems, ensuring safety and providing secure access to video.