WorldWideScience

Sample records for video tracking system

  1. Video-based Chinese Input System via Fingertip Tracking

    Directory of Open Access Journals (Sweden)

    Chih-Chang Yu

    2012-10-01

Full Text Available In this paper, we propose a system that detects and tracks fingertips online and recognizes Mandarin Phonetic Symbols (MPS) for user-friendly Chinese input. Using fingertips and cameras to replace pens and touch panels as input devices can reduce cost and improve the ease of use and comfort of the human-computer interface. In the proposed framework, particle filters with enhanced appearance models are applied for robust fingertip tracking. MPS combination recognition is then performed on the tracked fingertip trajectories using Hidden Markov Models. In the proposed system, the fingertips of the users can be robustly tracked, and the challenges of entering, leaving and virtual strokes caused by video-based fingertip input can be overcome. Experimental results show the feasibility and effectiveness of the proposed work.
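The particle-filter tracking loop described above can be sketched in its generic form. This is an illustrative 1-D version, not the authors' appearance-model filter: the appearance likelihood is replaced by a Gaussian distance to a scalar measurement (say, a fingertip x-coordinate), and all noise parameters are assumed.

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=1.0, meas_std=2.0):
    """One predict/update/resample cycle of a minimal 1-D particle filter."""
    # Predict: diffuse each particle with random motion noise.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: weight particles by a Gaussian likelihood of the measurement.
    weights = [math.exp(-((p - measurement) ** 2) / (2.0 * meas_std ** 2))
               for p in particles]
    total = sum(weights)
    if total == 0.0:  # degenerate case: every particle is far from the measurement
        weights, total = [1.0] * len(particles), float(len(particles))
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

def estimate(particles):
    """Point estimate of the tracked coordinate: the particle mean."""
    return sum(particles) / len(particles)
```

In a real fingertip tracker the measurement model would score image appearance around each particle rather than distance to a known coordinate.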

  2. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    International audience; Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  3. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

Full Text Available Video surveillance systems are based on video and image processing research areas within computer science. Video processing covers various methods used to detect changes in an existing scene in a specific video, and is nowadays one of the important areas of computer science. Two-dimensional videos are used in various segmentation, object detection and tracking processes, which appear in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking, and similar methods exist in the literature. This study proposes a more efficient method as an addition to the existing ones. Based on a model produced using adaptive background subtraction (ABS), object detection and tracking software was implemented. The performance of the developed system was tested in experiments with related video datasets. The experimental results and a discussion are given in the study.
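The adaptive background subtraction idea can be illustrated with a simple running-average model. The sketch below operates on flattened grayscale frames (lists of pixel values); the learning rate `alpha` and the `threshold` are illustrative values, not taken from the study.

```python
def update_background(background, frame, alpha=0.05):
    """Adapt the background model: B <- (1 - alpha) * B + alpha * F,
    so the model slowly absorbs gradual scene changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=25):
    """Mark a pixel as foreground when it deviates from the model
    by more than the threshold."""
    return [abs(f - b) > threshold for f, b in zip(frame, background)]
```

Per frame, one would compute the mask first and then update the model, so that moving objects do not immediately contaminate the background.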

  4. Evaluation of a video-based head motion tracking system for dedicated brain PET

    Science.gov (United States)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system performs tracking with near-millimeter accuracy and can help preserve the resolution of brain PET images in the presence of head movement.
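The reported video-tracking error is a comparison against a magnetic tracker used as ground truth. A straightforward way to compute such a mean error over paired 3-D samples (an illustrative metric only, not the paper's full evaluation protocol) is:

```python
import math

def mean_tracking_error(tracked, ground_truth):
    """Mean Euclidean distance between tracked 3-D points and
    time-aligned reference points from a ground-truth device."""
    errors = [math.dist(t, g) for t, g in zip(tracked, ground_truth)]
    return sum(errors) / len(errors)
```

A full evaluation would also report the spread of the per-sample errors (the ±0.90 mm above), which is one `statistics.stdev` call away.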

  5. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  6. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Science.gov (United States)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
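Chroma-key insertion, mentioned in both versions of this record, classifies pixels by closeness to a key colour. A minimal RGB sketch follows; the key colour and tolerance are assumptions, and production keyers typically work in a colour space less sensitive to lighting (e.g. YCbCr) rather than raw RGB.

```python
def chroma_key_mask(pixels, key=(0, 255, 0), tol=60):
    """Return True for foreground pixels: a pixel counts as background
    when every RGB channel is within `tol` of the key colour."""
    def is_key(p):
        return all(abs(c - k) <= tol for c, k in zip(p, key))
    return [not is_key(p) for p in pixels]
```

The resulting mask is what lets the extracted object be composited, scaled, and repositioned inside the 3D virtual meeting room.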

  7. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch the tracker in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  8. AUTOMATIC FAST VIDEO OBJECT DETECTION AND TRACKING ON VIDEO SURVEILLANCE SYSTEM

    Directory of Open Access Journals (Sweden)

    V. Arunachalam

    2012-08-01

Full Text Available This paper describes advanced techniques for object detection and tracking in video. Most visual surveillance systems start with motion detection. Motion detection methods attempt to locate connected regions of pixels that represent moving objects within the scene; approaches include frame-to-frame differencing, background subtraction and motion analysis. Motion detection can be achieved by Principal Component Analysis (PCA), after which objects are separated from the background using background subtraction. The detected objects can then be segmented. Segmentation consists of two schemes: one for spatial segmentation and the other for temporal segmentation. Tracking is performed on the detected objects in each frame. The pixel-labeling problem can be alleviated by the MAP (Maximum a Posteriori) technique.

  9. A digital video tracking system

    Science.gov (United States)

    Giles, M. K.

    1980-01-01

    The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.
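Locating the center of mass of the object's projection on the image plane amounts to an intensity-weighted centroid. A minimal sketch over a 2-D grayscale frame follows (an assumed array representation, not the RTV's actual processor pipeline, which would first separate the object from the background):

```python
def center_of_mass(frame):
    """Intensity-weighted centroid (x, y) of a 2-D grayscale frame,
    given as rows of pixel values; returns None for an all-zero frame."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            x_sum += x * v
            y_sum += y * v
    if total == 0:
        return None
    return (x_sum / total, y_sum / total)
```

Repeating this per field at 60 Hz yields the stream of image-plane positions from which the tracker can predict the next observation angle.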

  10. GPS-Aided Video Tracking

    Directory of Open Access Journals (Sweden)

    Udo Feuerhake

    2015-08-01

Full Text Available Tracking moving objects is both challenging and important for a large variety of applications. Different technologies based on the global positioning system (GPS) and video or radio data are used to obtain the trajectories of the observed objects. However, in some use cases, they fail to provide sufficiently accurate, complete and correct data at the same time. In this work we present an approach for fusing GPS- and video-based tracking in order to exploit their individual advantages. In this way we aim to combine the reliability of GPS tracking with the high geometric accuracy of camera detection. For the fusion of the movement data provided by the different devices we use a hidden Markov model (HMM) formulation and the Viterbi algorithm to extract the most probable trajectories. In three experiments, we show that our approach is able to deal with challenging situations like occlusions or objects which are temporarily outside the monitored area. The results show the desired increase in terms of accuracy, completeness and correctness.
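The Viterbi step — extracting the most probable state sequence from an HMM — can be sketched in its generic, log-space form. The state and observation models below are placeholders, not the GPS/video fusion model of the paper.

```python
def viterbi(init, trans, emit):
    """Most probable hidden-state path; all scores are log-probabilities.
    init[s]: log P(start in s); trans[a][b]: log P(a -> b);
    emit[t][s]: log P(observation t | state s)."""
    n = len(init)
    score = [init[s] + emit[0][s] for s in range(n)]
    back = []
    for t in range(1, len(emit)):
        ptr, new = [], []
        for s in range(n):
            # Best predecessor state for s at time t.
            a = max(range(n), key=lambda a_: score[a_] + trans[a_][s])
            ptr.append(a)
            new.append(score[a] + trans[a][s] + emit[t][s])
        back.append(ptr)
        score = new
    # Backtrack from the best final state.
    state = max(range(n), key=lambda s: score[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))
```

In the fusion setting, states would be candidate positions (from GPS or camera detections) and the emission scores would reflect each sensor's accuracy.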

  11. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation

    OpenAIRE

    McCall, J C; Trivedi, Mohan Manubhai

    2006-01-01

Driver-assistance systems that monitor driver intent, warn drivers of lane departures, or assist in vehicle guidance are all being actively considered. It is therefore important to take a critical look at key aspects of these systems, one of which is lane-position tracking. These driver-assistance objectives motivate the development of the novel "video-based lane estimation and tracking" (VioLET) system. The system is designed using steerable filters for robust and accurate lan...

  12. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),

  13. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose and orientation.

  14. Multi-view video segmentation and tracking for video surveillance

    Science.gov (United States)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. The method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
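The homography transform used to map objects between overlapping views is a 3x3 projective transform applied in homogeneous coordinates. A minimal sketch (the matrix in the test is an illustrative pure translation, not a calibrated inter-view homography):

```python
def apply_homography(H, point):
    """Map an image point through a 3x3 homography:
    [x', y', w']^T = H [x, y, 1]^T, then divide by w'."""
    x, y = point
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    wp = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xp / wp, yp / wp)
```

In practice the matrix H is estimated from point correspondences between the two views (e.g. with a least-squares or RANSAC fit) rather than written by hand.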

  15. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each individual stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  16. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    Science.gov (United States)

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To meet this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real time and up to 100 fps if video recordings are captured for later off-line analysis. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.

  17. Technology survey on video face tracking

    Science.gov (United States)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey on literature and software that are published or developed during recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  18. Visualization of ground truth tracks for the video 'Tracking a "facer's" behavior in a public plaza'

    DEFF Research Database (Denmark)

    2015-01-01

The video shows the ground truth tracks in GIS of all pedestrians in the video 'Tracking a "facer's" behavior in a public plaza'. The visualization was made using QGIS TimeManager.

  19. Video-based measurements for wireless capsule endoscope tracking

    International Nuclear Information System (INIS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K

    2014-01-01

The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.

  20. Video-based measurements for wireless capsule endoscope tracking

    Science.gov (United States)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.
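The registration step above combines feature matching with random sample consensus. A toy RANSAC-style estimator for a pure 2-D translation between consecutive frames is sketched below; real capsule-tracking registration estimates rotation as well, and the iteration count and inlier tolerance are assumptions.

```python
import random

def ransac_translation(matches, iters=200, tol=2.0):
    """Estimate a 2-D inter-frame translation from noisy point matches.

    matches: list of ((px, py), (qx, qy)) feature correspondences.
    RANSAC loop: pick one match as a hypothesis, count how many other
    matches agree with its translation, keep the best hypothesis.
    Returns ((dx, dy), inlier_count)."""
    best, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        (px, py), (qx, qy) = random.choice(matches)
        dx, dy = qx - px, qy - py
        inliers = sum(
            1 for (ax, ay), (bx, by) in matches
            if abs((bx - ax) - dx) <= tol and abs((by - ay) - dy) <= tol
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers
```

The consensus count is what makes the estimate robust: a single mismatched feature pair (an outlier) gathers almost no support and is outvoted.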

1. Kalman Filter Based Tracking in a Video Surveillance System

    Directory of Open Access Journals (Sweden)

    SULIMAN, C.

    2010-05-01

Full Text Available In this paper we develop a Matlab/Simulink-based model for monitoring a contact in a video surveillance sequence. For the segmentation process and correct identification of a contact in a surveillance video, we use the Horn-Schunck optical flow algorithm. The position and the behavior of the correctly detected contact are monitored with the help of the traditional Kalman filter. We then compare the results obtained from the optical flow method with the ones obtained from the Kalman filter, and show the correct functionality of the Kalman filter based tracking. The tests were performed using video data taken with a fixed camera. The tested algorithm has shown promising results.
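The Kalman filter used for monitoring the contact's position can be sketched as a 1-D constant-velocity filter; the process noise `q` and measurement noise `r` are illustrative values, not taken from the paper, and a real tracker would run two such filters (or one 4-state filter) for x and y.

```python
def kalman_step(state, cov, measurement, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    state = (position, velocity); cov = 2x2 covariance as nested tuples."""
    x, v = state
    (p00, p01), (p10, p11) = cov
    # Predict with the constant-velocity model x' = x + v*dt.
    x, v = x + v * dt, v
    p00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
    p01 = p01 + dt * p11
    p10 = p10 + dt * p11
    p11 = p11 + q
    # Update with the position measurement z = x + noise.
    s = p00 + r                      # innovation variance
    k0, k1 = p00 / s, p10 / s        # Kalman gains
    y = measurement - x              # innovation
    x, v = x + k0 * y, v + k1 * y
    cov = ((1 - k0) * p00, (1 - k0) * p01), (p10 - k1 * p00, p11 - k1 * p01)
    return (x, v), cov
```

Fed with the contact positions reported by the optical-flow segmentation, the filter smooths the track and carries a velocity estimate through brief detection gaps.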

  2. Fast-track video-assisted thoracoscopic surgery

    DEFF Research Database (Denmark)

    Holbek, Bo Laksafoss; Petersen, René Horsleben; Kehlet, Henrik

    2016-01-01

Objectives To provide a short overview of fast-track video-assisted thoracoscopic surgery (VATS) and to identify areas requiring further research. Design A literature search was made using key words including: fast-track, enhanced recovery, video-assisted thoracoscopic surgery, robot-assisted thoracoscopic surgery (RATS), robotic, thoracotomy, single-incision, uniportal, natural orifice transluminal endoscopic surgery (NOTES), chest tube, air-leak, digital drainage, pain management, analgesia, perioperative management, anaesthesia and non-intubated. References from articles were screened for further...

  3. Robust feedback zoom tracking for digital video surveillance.

    Science.gov (United States)

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called "trace curve", which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional-integral-derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, we propose in this paper a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects, which is a key challenge in video surveillance.
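The feedback part of the FZT approach builds on a textbook discrete PID controller. The sketch below is that generic controller, not the paper's implementation; the gains and the integrator-style "motor" in the test are illustrative.

```python
class PID:
    """Discrete PID controller: output = kp*e + ki*integral(e) + kd*de/dt,
    the kind of loop that can drive a focus motor toward the trace curve."""

    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

The proportional term gives fast correction, the integral term removes steady-state error, and the derivative term damps overshoot; tuning the three gains trades off these behaviors.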

  4. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    Science.gov (United States)

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

    The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
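Motion-based metrics such as the path length and average speed reported above are straightforward to compute from the tracked instrument-tip positions. A sketch follows, assuming a fixed sampling interval (which this record does not specify):

```python
import math

def path_length(positions):
    """Total distance travelled by the instrument tip: sum of
    Euclidean distances between consecutive 3-D samples."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def average_speed(positions, dt):
    """Mean tip speed given a fixed sampling interval dt (seconds)."""
    duration = dt * (len(positions) - 1)
    return path_length(positions) / duration if duration else 0.0
```

Shorter paths and smoother speed profiles for the same task are the kind of signal such metrics use to separate experts from novices.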

  5. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    Directory of Open Access Journals (Sweden)

    Riad I. Hammoud

    2014-10-01

Full Text Available We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  6. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    Science.gov (United States)

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.
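At its simplest, associating chats with tracks reduces to matching message timestamps against track lifetimes. A hedged sketch of that core step (the data layout below is an assumption; the paper's probabilistic fusion is far richer):

```python
def associate_chats(chats, tracks):
    """Attach each chat message to every track active at its timestamp.

    chats:  list of (timestamp, text) pairs
    tracks: dict mapping track_id -> (t_start, t_end) lifetime interval
    """
    labels = {tid: [] for tid in tracks}
    for t, text in chats:
        for tid, (t0, t1) in tracks.items():
            if t0 <= t <= t1:  # message falls inside the track's lifetime
                labels[tid].append(text)
    return labels
```

In practice a probabilistic score (spatial references in the text, temporal proximity) would replace the hard interval test.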

  7. Near real-time bi-planar fluoroscopic tracking system for the video tumor fighter

    International Nuclear Information System (INIS)

    Lawson, M.A.; Wika, K.G.; Gillies, G.T.; Ritter, R.C.

    1991-01-01

The authors have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the Video Tumor Fighter (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar x-ray fluoroscopy are presented.

  8. High-throughput phenotyping of plant resistance to aphids by automated video tracking.

    Science.gov (United States)

    Kloth, Karen J; Ten Broeke, Cindy Jm; Thoen, Manus Pm; Hanhart-van den Brink, Marianne; Wiegers, Gerrie L; Krips, Olga E; Noldus, Lucas Pjj; Dicke, Marcel; Jongsma, Maarten A

    2015-01-01

    Piercing-sucking insects are major vectors of plant viruses causing significant yield losses in crops. Functional genomics of plant resistance to these insects would greatly benefit from the availability of high-throughput, quantitative phenotyping methods. We have developed an automated video tracking platform that quantifies aphid feeding behaviour on leaf discs to assess the level of plant resistance. Through the analysis of aphid movement, the start and duration of plant penetrations by aphids were estimated. As a case study, video tracking confirmed the near-complete resistance of lettuce cultivar 'Corbana' against Nasonovia ribisnigri (Mosely), biotype Nr:0, and revealed quantitative resistance in Arabidopsis accession Co-2 against Myzus persicae (Sulzer). The video tracking platform was benchmarked against Electrical Penetration Graph (EPG) recordings and aphid population development assays. The use of leaf discs instead of intact plants reduced the intensity of the resistance effect in video tracking, but sufficiently replicated experiments resulted in similar conclusions as EPG recordings and aphid population assays. One video tracking platform could screen 100 samples in parallel. Automated video tracking can be used to screen large plant populations for resistance to aphids and other piercing-sucking insects.
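Estimating the start and duration of plant penetrations from movement alone amounts to finding sustained low-speed runs in the tracked aphid positions. A sketch under that assumption (thresholds and function names are illustrative, not the platform's):

```python
import math

def stationary_bouts(positions, dt, speed_thresh, min_frames):
    """Find runs of consecutive frames where per-frame displacement stays
    below speed_thresh * dt; return (start_frame, length) pairs.
    Long stationary bouts are candidate feeding/penetration events."""
    bouts, run_start = [], None
    for i, (a, b) in enumerate(zip(positions, positions[1:])):
        moving = math.dist(a, b) > speed_thresh * dt
        if not moving and run_start is None:
            run_start = i
        elif moving and run_start is not None:
            if i - run_start >= min_frames:
                bouts.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(positions) - 1 - run_start >= min_frames:
        bouts.append((run_start, len(positions) - 1 - run_start))
    return bouts
```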

  9. Tracking and recognition face in videos with incremental local sparse representation model

    Science.gov (United States)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed by employing a local sparse appearance model and a covariance pooling method. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows the recognition algorithm to adapt the template to appearance changes and to reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In experiments on the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  10. A high precision video-electronic measuring system for use with solid state track detectors

    International Nuclear Information System (INIS)

    Schott, J.U.; Schopper, E.; Staudte, R.

    1976-01-01

A video-electronic image analyzing system, Quantimet 720, has been modified to meet the requirements of measuring tracks of nuclear particles in solid state track detectors, with resulting improvements in precision and speed and the elimination of subjective influences. A microscope equipped with an automatic XY stage projects the image onto the cathode of a vidicon amplifier. Within the generated TV picture, characterized by the coordinates XY in the specimen, we determine the coordinates xy of events by setting cross lines on the screen, which corresponds to a digital accuracy of 0.1 μm at the position of the object. Automatic movement in the Z direction can be performed by a stepping motor and measured electronically, or continuously by applying an electric voltage to a piezostrictive support of the objective. (orig.)

  11. Linear array of photodiodes to track a human speaker for video recording

    International Nuclear Information System (INIS)

    DeTone, D; Neal, H; Lougheed, R

    2012-01-01

Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.

  12. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
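The 70 Hz flashing makes the LED separable from static infrared sources: each photodiode's 4 kHz sample stream can be demodulated at the flash frequency, lock-in style, and the pixel with the strongest 70 Hz modulation is the necklace. A hedged sketch of that idea (the real system's electronics and algorithm are not published in the abstract):

```python
import math

def led_position(samples, fs=4000.0, f_led=70.0):
    """samples: list of per-frame readouts, each a list of pixel values.
    Returns the pixel index whose intensity is most strongly modulated
    at f_led, via correlation against quadrature sinusoid references.
    Constant sources (sunlight, room lights) have ~zero 70 Hz amplitude."""
    n, npix = len(samples), len(samples[0])
    best, best_amp = 0, -1.0
    for p in range(npix):
        re = im = 0.0
        for k in range(n):
            phase = 2 * math.pi * f_led * k / fs
            re += samples[k][p] * math.cos(phase)
            im += samples[k][p] * math.sin(phase)
        amp = math.hypot(re, im)  # 70 Hz modulation amplitude at pixel p
        if amp > best_amp:
            best, best_amp = p, amp
    return best
```

Integrating over a whole number of flash cycles makes the DC rejection exact.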

  13. A survey on the automatic object tracking technology using video signals

    International Nuclear Information System (INIS)

    Lee, Jae Cheol; Jun, Hyeong Seop; Choi, Yu Rak; Kim, Jae Hee

    2003-01-01

Recently, automatic identification and tracking of objects have been actively studied, following the rapid development of signal processing and vision technology using improved hardware and software. Object tracking technology can be applied to various fields such as road monitoring of vehicles, weather satellites, traffic observation, intelligent remote video-conferencing and autonomous mobile robots. An object tracking system receives subsequent pictures from the camera and detects motions of the objects in these pictures. In this report, we investigate various object tracking techniques, such as brightness change using histogram characteristics, differential image analysis, and contour and feature extraction, and try to find proper methods that can actually be applied to mobile robots.
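Differential image analysis, one of the surveyed techniques, can be sketched as thresholded frame differencing (a minimal illustration on grayscale frames, not the report's implementation):

```python
def motion_mask(prev, curr, thresh=25):
    """Mark pixels whose absolute intensity change between two
    consecutive grayscale frames (lists of rows) exceeds thresh."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

Connected regions of the mask then become candidate moving objects for the tracker.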

  14. INFLUENCE OF STOCHASTIC NOISE STATISTICS ON KALMAN FILTER PERFORMANCE BASED ON VIDEO TARGET TRACKING

    Institute of Scientific and Technical Information of China (English)

    Chen Ken; Napolitano; Zhang Yun; Li Dong

    2010-01-01

The system stochastic noises involved in Kalman filtering are assumed to be ideally white and Gaussian distributed. In this research, efforts are devoted to exploring the influence of the noise statistics on Kalman filtering from the perspective of video target tracking quality. The correlation of tracking precision to both the process and measurement noise covariances is investigated; the signal-to-noise power density ratio is defined; the contribution of predicted states and measured outputs to Kalman filter behavior is discussed; and the relative sensitivity of tracking precision is derived and applied to this study case. The findings are expected to pave the way for future study of how actual noise statistics that deviate from the assumed ones impact Kalman filter optimality and degradation in video tracking applications.
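To make the role of the two covariances concrete, here is a minimal 1-D constant-velocity Kalman filter in which the process-noise intensity q and the measurement-noise variance r appear explicitly. This is a textbook sketch, not the paper's model; in particular the simplified Q ≈ diag(q·dt, q) is an assumption:

```python
def kalman_track(zs, dt=1.0, q=0.01, r=1.0):
    """1-D constant-velocity Kalman filter over position measurements zs.
    q scales the process noise, r is the measurement noise variance."""
    x = [zs[0], 0.0]              # state: [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
    out = []
    for z in zs:
        # Predict: x = F x, P = F P F' + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with measurement z, observation H = [1, 0]
        s = P[0][0] + r                      # innovation variance
        k = [P[0][0] / s, P[1][0] / s]       # Kalman gain
        y = z - x[0]                         # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out
```

Raising r (distrusting measurements) or lowering q (trusting the motion model) smooths the track at the cost of lag, which is exactly the trade-off the paper studies.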

  15. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    Energy Technology Data Exchange (ETDEWEB)

    Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-08-15

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviation) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors

  16. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    International Nuclear Information System (INIS)

    Ebe, Kazuyu; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence

    2015-01-01

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviation) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors
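The reported positional-error statistic (absolute mean difference + 2 SD of the per-frame offsets between the exposed-target centre and the exposed-field centre in Y) can be written directly; a sketch with assumed input arrays:

```python
import math

def positional_error(target_y, field_y):
    """Absolute mean difference + 2 SD between per-frame target-centre
    and field-centre positions (Y direction), per the QA metric above."""
    diffs = [abs(t - f) for t, f in zip(target_y, field_y)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return mean + 2 * math.sqrt(var)
```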

  17. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we split the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility of the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, and a point insertion process provides the feature points for the next frame's tracking.

  18. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, on the hand tracking side, a joint observation model with the hand cues of skin saliency, motion and depth is integrated into a particle filter in order to move particles to the local peak in the likelihood. The proposed hand tracking method, namely the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably in the context of the signer performing the gesture toward the camera device and in front of moving, cluttered backgrounds. On the gesture recognition side, a shape-order context descriptor on the basis of shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust score for gesture invariance. Our approach is complemented with experimental results on the settings of the challenging hand-signed digits datasets and the American Sign Language dataset, which corroborate the performance of the novel techniques.
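The particle-filter backbone of a tracker like SSMD-PF can be sketched as a standard bootstrap filter whose likelihood would combine the skin, motion and depth cues. Everything below (the likelihood, the random-walk motion model, the parameters) is a generic illustration, not the authors' implementation:

```python
import random

def particle_filter_step(particles, weights, likelihood, motion_std=2.0):
    """One predict-weight-resample cycle of a bootstrap particle filter.
    particles: list of (x, y); likelihood(p) scores a particle against the
    observation model (here supplied by the caller)."""
    # Predict: diffuse particles with a random-walk motion model
    moved = [(x + random.gauss(0, motion_std), y + random.gauss(0, motion_std))
             for x, y in particles]
    # Weight by the observation likelihood
    w = [weights[i] * likelihood(p) for i, p in enumerate(moved)]
    total = sum(w) or 1.0  # guard against total weight collapse
    w = [wi / total for wi in w]
    # Resample toward the likelihood peak; weights reset to uniform
    resampled = random.choices(moved, weights=w, k=len(moved))
    return resampled, [1.0 / len(moved)] * len(moved)
```

Iterating the step concentrates the particle cloud on the mode of the likelihood, which is what lets the tracker lock onto the hand.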

  19. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    Directory of Open Access Journals (Sweden)

    Xin Li

    2014-06-01

Full Text Available Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.

  20. Image and video based remote target localization and tracking on smartphones

    Science.gov (United States)

    Wang, Qia; Lobzhanidze, Alex; Jang, Hyun; Zeng, Wenjun; Shang, Yi; Yang, Jingyu

    2012-06-01

Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, their powerful sensing and computing capability. In this paper, we describe a novel and accurate image and video based remote target localization and tracking system using Android smartphones, leveraging built-in sensors such as the camera, digital compass and GPS. Even though many other distance estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, the smartphone's user-friendly interface has been effectively taken advantage of by our system to facilitate low complexity and high accuracy. Our experimental results show that our system works accurately and efficiently.
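Given the phone's GPS fix, a compass bearing and a distance estimate, projecting the target's coordinates is a small geometric step. A flat-earth sketch (adequate at short range; the paper's actual method is not spelled out in the abstract):

```python
import math

def locate_target(lat, lon, bearing_deg, distance_m):
    """Project a target position from the observer's GPS fix, compass
    bearing (degrees clockwise from north) and estimated distance.
    Flat-earth approximation; fine for short ranges."""
    R = 6371000.0  # mean Earth radius in metres
    b = math.radians(bearing_deg)
    dlat = distance_m * math.cos(b) / R
    dlon = distance_m * math.sin(b) / (R * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)
```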

  1. ‘PhysTrack’: a Matlab based environment for video tracking of kinematics in the physics laboratory

    Science.gov (United States)

    Umar Hassan, Muhammad; Sabieh Anwar, Muhammad

    2017-07-01

In the past two decades, several computer software tools have been developed to investigate the motion of moving bodies in physics laboratories. In this article we report a Matlab-based video tracking library, PhysTrack, primarily designed to investigate kinematics. We compare PhysTrack with other commonly available video tracking tools and outline its salient features. The general methodology of the whole video tracking process is described with a step-by-step explanation of several functionalities. Furthermore, results of some real physics experiments are provided to demonstrate the working of the automated video tracking, data extraction, data analysis and presentation tools that come with this development environment. We believe that PhysTrack will be valuable for the large community of physics teachers and students already employing Matlab.
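Once positions have been tracked frame by frame, velocity and acceleration follow from finite differences. A sketch of the kind of kinematics post-processing such a tool performs (not PhysTrack's own code):

```python
def derivative(values, dt):
    """Central-difference derivative of a uniformly sampled signal;
    the result is two samples shorter than the input."""
    return [(values[i + 1] - values[i - 1]) / (2 * dt)
            for i in range(1, len(values) - 1)]
```

Applying it twice to a tracked position series recovers velocity and then acceleration; central differences are exact for uniformly accelerated (quadratic) motion.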

  2. A new user-assisted segmentation and tracking technique for an object-based video editing system

    Science.gov (United States)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  3. Intra-system reliability of SICS: video-tracking system (Digital.Stadium®) for performance analysis in football.

    Science.gov (United States)

    Beato, Marco; Jamil, Mikael

    2017-05-09

The correct evaluation of external load parameters is a key factor in professional football. The instruments usually utilised to quantify external load parameters during official matches are Video-Tracking Systems (VTS). VTS is a technology that records two-dimensional position data (x and y) at high sampling rates (over 25 Hz). The aim of this study was to evaluate the intra-system reliability of the Digital.Stadium® VTS. 28 professional male football players taking part in the Italian Serie A (age 24 ± 6 years, body mass 79.5 ± 7.8 kg, stature 1.83 ± 0.05 m) during the 2015/16 season were enrolled in this study (Team A and Team B). Video analysis was done during an official match, and data analysis was performed immediately after the game ended and then replicated a week later. This study reported a near perfect relationship between the initial analysis (analysis 1) and the replicated analysis undertaken a week later (analysis 2). R2 coefficients were highly significant for each of the performance parameters, e.g. a power of 9.65 ± 1.64 W kg-1 and 9.58 ± 1.61 W kg-1 in analysis 1 and analysis 2, respectively. The findings reported in this study underline that all data reported by the Digital.Stadium® VTS showed high levels of absolute and relative reliability.
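The run-to-run agreement statistic used above is the R2 (squared Pearson correlation) between the paired parameter values from the two analyses. A sketch (the data in the example are illustrative, not the study's):

```python
def r_squared(x, y):
    """Coefficient of determination between paired measurements,
    e.g. the same match parameter from analysis 1 and analysis 2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)
```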

  4. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos

    KAUST Repository

    Wu, Baoyuan

    2016-10-25

    Face clustering and face tracking are two areas of active research in automatic facial video processing. They, however, have long been studied separately, despite the inherent link between them. In this paper, we propose to perform simultaneous face clustering and face tracking from real world videos. The motivation for the proposed research is that face clustering and face tracking can provide useful information and constraints to each other, thus can bootstrap and improve the performances of each other. To this end, we introduce a Coupled Hidden Markov Random Field (CHMRF) to simultaneously model face clustering, face tracking, and their interactions. We provide an effective algorithm based on constrained clustering and optimal tracking for the joint optimization of cluster labels and face tracking. We demonstrate significant improvements over state-of-the-art results in face clustering and tracking on several videos.

  5. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  6. Tracking of Individuals in Very Long Video Sequences

    DEFF Research Database (Denmark)

    Fihl, Preben; Corlin, Rasmus; Park, Sangho

    2006-01-01

    In this paper we present an approach for automatically detecting and tracking humans in very long video sequences. The detection is based on background subtraction using a multi-mode Codeword method. We enhance this method both in terms of representation and in terms of automatically updating...

  7. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    Science.gov (United States)

    2011-10-01

... a larger number of cores using the IBM QS22 Blade for handling higher video processing workloads (but at higher cost per core), low power consumption and ... Cell/B.E. Blade processors, which have a lot more main memory but also higher power consumption. More detailed performance figures for HD and SD video ...

  8. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

Full Text Available The design of smart video surveillance systems is an active research field in the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by movement of the tracked object. The complete system, including the camera interface, DDR2 external memory interface controller, the designed object tracking VLSI architecture, the camera movement controller and the display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
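The pan decision in such a system reduces to steering the camera whenever the tracked object drifts out of a central dead zone. A hedged sketch of that control rule (parameters are illustrative; 720 is the PAL width mentioned above):

```python
def pan_command(obj_x, frame_width=720, dead_zone=40, gain=0.1):
    """Map the tracked object's horizontal pixel position to a signed
    pan-speed command; no movement while the object stays within
    dead_zone pixels of the frame centre."""
    err = obj_x - frame_width / 2
    if abs(err) <= dead_zone:
        return 0.0
    # proportional control on the error beyond the dead zone
    return gain * (err - dead_zone if err > 0 else err + dead_zone)
```

The dead zone prevents the camera from jittering while the object hovers near the centre.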

  9. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    Science.gov (United States)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology has been widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems. Pixel-based tracking cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's spatial coordinate system, obtained by converting the target's 2-D pixel coordinates into 3-D coordinates. The experimental results show that our method restores the real position changes of targets well and accurately recovers the trajectory of the target in space.
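
    The 2-D-to-3-D conversion described above can be sketched with a pinhole model: once a camera is calibrated (e.g., by Zhang's method), a pixel can be back-projected onto the ground plane Z = 0 through the plane-induced homography H = K[r1 r2 t]. The matrices below are illustrative values, not the paper's calibration.

```python
def inv3(M):
    """Invert a 3x3 matrix via its adjugate (enough for a homography)."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

def pixel_to_ground(K, R, t, u, v):
    """Back-project pixel (u, v) onto the world ground plane Z = 0."""
    # Plane-induced homography H = K [r1 r2 t]: drop the third column of R
    M = [[R[0][0], R[0][1], t[0]],
         [R[1][0], R[1][1], t[1]],
         [R[2][0], R[2][1], t[2]]]
    H = [[sum(K[i][k] * M[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    Hinv = inv3(H)
    w = [Hinv[i][0] * u + Hinv[i][1] * v + Hinv[i][2] for i in range(3)]
    return w[0] / w[2], w[1] / w[2]
```

    The homography is invertible whenever the camera center does not lie in the ground plane, so a single calibrated view suffices for planar targets.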

  10. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Lerouge Sam

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  11. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  12. Real-time logo detection and tracking in video

    Science.gov (United States)

    George, M.; Kehtarnavaz, N.; Rahman, M.; Carlsohn, M.

    2010-05-01

    This paper presents a real-time implementation of a logo detection and tracking algorithm in video. The motivation of this work stems from applications on smart phones that require the detection of logos in real-time. For example, one application involves detecting company logos so that customers can easily get special offers in real-time. This algorithm uses a hybrid approach by initially running the Scale Invariant Feature Transform (SIFT) algorithm on the first frame in order to obtain the logo location and then by using an online calibration of color within the SIFT detected area in order to detect and track the logo in subsequent frames in a time efficient manner. The results obtained indicate that this hybrid approach allows robust logo detection and tracking to be achieved in real-time.
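
    The handoff from SIFT detection to fast color tracking can be illustrated with a toy version of the second stage: build a color histogram inside the detected box on the first frame, then relocate the box in later frames by maximizing Bhattacharyya similarity over a small search window. The 8-bin quantization and synthetic frames are assumptions for illustration; the SIFT stage itself would come from a feature library such as OpenCV.

```python
def hist(frame, box, bins=8):
    """Normalized color histogram of the pixels inside box = (x, y, w, h)."""
    x, y, w, h = box
    counts = [0] * bins
    for r in range(y, y + h):
        for c in range(x, x + w):
            counts[frame[r][c] % bins] += 1
    n = float(w * h)
    return [v / n for v in counts]

def similarity(h1, h2):
    """Bhattacharyya coefficient between two normalized histograms."""
    return sum((a * b) ** 0.5 for a, b in zip(h1, h2))

def track(frames, box0, search=2):
    """Lock a color model on frame 0, then follow it through later frames."""
    x, y, w, h = box0
    model = hist(frames[0], box0)
    boxes = [box0]
    for frame in frames[1:]:
        best, best_s = (x, y), -1.0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                s = similarity(model, hist(frame, (x + dx, y + dy, w, h)))
                if s > best_s:
                    best_s, best = s, (x + dx, y + dy)
        x, y = best
        boxes.append((x, y, w, h))
    return boxes
```

    Because only a small window around the previous location is scored, the per-frame cost stays constant, which is what makes the hybrid scheme time efficient after the one-off SIFT detection.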

  13. Automation system for optical counting of nuclear tracks

    Energy Technology Data Exchange (ETDEWEB)

    Boulyga, S.F.; Boulyga, E.G.; Lomonosova, E.M.; Zhuk, I.V

    1999-06-01

    An automation system consisting of the microscope, video camera and Pentium PC with frame recorder was created. The system provides counting of nuclear tracks on the SSNTD surface with a resolution of 752 x 582 points, determination of the surface area and main axis of the track. The pattern recognition program was developed for operation in Windows 3.1 (or higher) ensuring a convenient interface with the user. In a comparison of the results on automatic track counting with the more accurate hand mode it was shown that the program enables the tracks to be detected even on images with a rather high noise level. It ensures a high accuracy of track counting being comparable with the accuracy of manual counting for densities of tracks in the range of up to 2·10⁵ tracks/cm². The automatic system was applied in the experimental investigation of uranium and transuranium elements.
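
    The core of such a counting system can be sketched as: segment the binary track image into connected components, then report each track's area and major-axis orientation from image moments. The 8-connectivity and the moment-based orientation formula are standard choices for this task, not details taken from the record above.

```python
from collections import deque
from math import atan2

def label_components(img):
    """Flood-fill labeling of an 8-connected binary image; returns pixel lists."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                pix, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1),
                                   (1, 1), (1, -1), (-1, 1), (-1, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(pix)
    return comps

def track_stats(pix):
    """Area and major-axis orientation from central second moments."""
    n = len(pix)
    cy = sum(p[0] for p in pix) / n
    cx = sum(p[1] for p in pix) / n
    mu20 = sum((p[1] - cx) ** 2 for p in pix) / n
    mu02 = sum((p[0] - cy) ** 2 for p in pix) / n
    mu11 = sum((p[1] - cx) * (p[0] - cy) for p in pix) / n
    theta = 0.5 * atan2(2 * mu11, mu20 - mu02)
    return n, theta
```

    The track density is then simply the component count divided by the scanned area, and the per-track statistics allow noise blobs to be filtered by area or elongation.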

  14. Automation system for optical counting of nuclear tracks

    International Nuclear Information System (INIS)

    Boulyga, S.F.; Boulyga, E.G.; Lomonosova, E.M.; Zhuk, I.V.

    1999-01-01

    An automation system consisting of the microscope, video camera and Pentium PC with frame recorder was created. The system provides counting of nuclear tracks on the SSNTD surface with a resolution of 752 x 582 points, determination of the surface area and main axis of the track. The pattern recognition program was developed for operation in Windows 3.1 (or higher) ensuring a convenient interface with the user. In a comparison of the results on automatic track counting with the more accurate hand mode it was shown that the program enables the tracks to be detected even on images with a rather high noise level. It ensures a high accuracy of track counting being comparable with the accuracy of manual counting for densities of tracks in the range of up to 2·10 5 tracks/cm 2 . The automatic system was applied in the experimental investigation of uranium and transuranium elements

  15. Automation system for optical counting of nuclear tracks

    CERN Document Server

    Boulyga, S F; Lomonosova, E M; Zhuk, I V

    1999-01-01

    An automation system consisting of the microscope, video camera and Pentium PC with frame recorder was created. The system provides counting of nuclear tracks on the SSNTD surface with a resolution of 752 x 582 points, determination of the surface area and main axis of the track. The pattern recognition program was developed for operation in Windows 3.1 (or higher) ensuring a convenient interface with the user. In a comparison of the results on automatic track counting with the more accurate hand mode it was shown that the program enables the tracks to be detected even on images with a rather high noise level. It ensures a high accuracy of track counting being comparable with the accuracy of manual counting for densities of tracks in the range of up to 2 centre dot 10 sup 5 tracks/cm sup 2. The automatic system was applied in the experimental investigation of uranium and transuranium elements.

  16. Persistent Aerial Tracking system for UAVs

    KAUST Repository

    Mueller, Matthias; Sharma, Gopal; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a persistent, robust and autonomous object tracking system for unmanned aerial vehicles (UAVs) called Persistent Aerial Tracking (PAT). A computer vision and control strategy is applied to a diverse set of moving objects (e.g. humans, animals, cars, boats, etc.) integrating multiple UAVs with a stabilized RGB camera. A novel strategy is employed to successfully track objects over a long period, by ‘handing over the camera’ from one UAV to another. We evaluate several state-of-the-art trackers on the VIVID aerial video dataset and additional sequences that are specifically tailored to low altitude UAV target tracking. Based on the evaluation, we select the leading tracker and improve upon it by optimizing for both speed and performance, integrate the complete system into an off-the-shelf UAV, and obtain promising results showing the robustness of our solution in real-world aerial scenarios.

  17. Persistent Aerial Tracking system for UAVs

    KAUST Repository

    Mueller, Matthias

    2016-12-19

    In this paper, we propose a persistent, robust and autonomous object tracking system for unmanned aerial vehicles (UAVs) called Persistent Aerial Tracking (PAT). A computer vision and control strategy is applied to a diverse set of moving objects (e.g. humans, animals, cars, boats, etc.) integrating multiple UAVs with a stabilized RGB camera. A novel strategy is employed to successfully track objects over a long period, by ‘handing over the camera’ from one UAV to another. We evaluate several state-of-the-art trackers on the VIVID aerial video dataset and additional sequences that are specifically tailored to low altitude UAV target tracking. Based on the evaluation, we select the leading tracker and improve upon it by optimizing for both speed and performance, integrate the complete system into an off-the-shelf UAV, and obtain promising results showing the robustness of our solution in real-world aerial scenarios.

  18. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    Science.gov (United States)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images, by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected on the image plane reference system is translated into coordinates referred to the same area map. In the map common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in the high resolution long distance tracking and in the automatic collection of biometric data such as a person's face clip for recognition purposes.

  19. Qualitative Video Analysis of Track-Cycling Team Pursuit in World-Class Athletes.

    Science.gov (United States)

    Sigrist, Samuel; Maier, Thomas; Faiss, Raphael

    2017-11-01

    Track-cycling team pursuit (TP) is a highly technical effort involving 4 athletes completing 4 km from a standing start, often in less than 240 s. Transitions between athletes leading the team are obviously of utmost importance. To perform qualitative video analyses of transitions of world-class athletes in TP competitions. Videos captured at 100 Hz were recorded for 77 races (including 96 different athletes) in 5 international track-cycling competitions (eg, UCI World Cups and World Championships) and analyzed for the 12 best teams in the UCI Track Cycling TP Olympic ranking. During TP, 1013 transitions were evaluated individually to extract quantitative (eg, average lead time, transition number, length, duration, height in the curve) and qualitative (quality of transition start, quality of return at the back of the team, distance between third and returning rider score) variables. Determination of correlation coefficients between extracted variables and end time allowed assessment of relationships between variables and relevance of the video analyses. Overall quality of transitions and end time were significantly correlated (r = .35, P = .002). Similarly, transition distance (r = .26, P = .02) and duration (r = .35, P = .002) were positively correlated with end time. Conversely, no relationship was observed between transition number, average lead time, or height reached in the curve and end time. Video analysis of TP races highlights the importance of quality transitions between riders, with preferably swift and short relays rather than longer lead times for faster race times.

  20. Tracking of ball and players in beach volleyball videos.

    Directory of Open Access Journals (Sweden)

    Gabriel Gomez

    Full Text Available This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74,6% and 82,6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54,2% for the trajectory growth method and 42,1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48,9% of correctly estimated ball contact points.
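
    The trajectory-growth idea mentioned above — seed a parabola from three ball candidates, then extend it with later candidates that stay close to the predicted flight path — can be sketched in a few lines. The tolerance and the synthetic candidates below are illustrative:

```python
def fit_parabola(p1, p2, p3):
    """Exact parabola y = a*t**2 + b*t + c through three (t, y) points."""
    (t1, y1), (t2, y2), (t3, y3) = p1, p2, p3
    denom = (t1 - t2) * (t1 - t3) * (t2 - t3)
    a = (t3 * (y2 - y1) + t2 * (y1 - y3) + t1 * (y3 - y2)) / denom
    b = (t3 * t3 * (y1 - y2) + t2 * t2 * (y3 - y1) + t1 * t1 * (y2 - y3)) / denom
    c = (t2 * t3 * (t2 - t3) * y1 + t3 * t1 * (t3 - t1) * y2
         + t1 * t2 * (t1 - t2) * y3) / denom
    return a, b, c

def grow_trajectory(candidates, tol=1.0):
    """Seed on the first three candidates, keep later ones near the prediction."""
    a, b, c = fit_parabola(*candidates[:3])
    inliers = list(candidates[:3])
    for t, y in candidates[3:]:
        if abs(a * t * t + b * t + c - y) < tol:
            inliers.append((t, y))
    return (a, b, c), inliers
```

    Ball contact points then fall out of the model: the intersection of two consecutive fitted parabolas gives the time at which the ball changed trajectory.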

  1. Tracking of Ball and Players in Beach Volleyball Videos

    Science.gov (United States)

    Gomez, Gabriel; Herrera López, Patricia; Link, Daniel; Eskofier, Bjoern

    2014-01-01

    This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74,6% and 82,6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54,2% for the trajectory growth method and 42,1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48,9% of correctly estimated ball contact points. PMID:25426936

  2. Quantitative analysis of spider locomotion employing computer-automated video tracking

    DEFF Research Database (Denmark)

    Baatrup, E; Bayley, M

    1993-01-01

    The locomotor activity of adult specimens of the wolf spider Pardosa amentata was measured in an open-field setup, using computer-automated colour object video tracking. The x,y coordinates of the animal in the digitized image of the test arena were recorded three times per second during four...

  3. ANNOTATION SUPPORTED OCCLUDED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Devinder Kumar

    2012-08-01

    Full Text Available Tracking occluded objects at different depths has become an extremely important component of video-sequence analysis, with wide applications in object tracking, scene recognition, coding, video editing and mosaicking. The paper studies the ability of annotation to track the occluded object based on pyramids with variation in depth, further establishing a threshold at which the ability of the system to track the occluded object fails. Image annotation is applied on 3 similar video sequences varying in depth. In the experiment, one bike occludes the other at a depth of 60cm, 80cm and 100cm respectively. Another experiment is performed on tracking humans at similar depths to authenticate the results. The paper also computes the frame-by-frame error incurred by the system, supported by detailed simulations. This system can be effectively used to analyze the error in motion tracking and further correct it, leading to flawless tracking. This can be of great interest to computer scientists while designing surveillance systems.

  4. Target Tracking of a Linear Time Invariant System under Irregular Sampling

    Directory of Open Access Journals (Sweden)

    Jin Xue-Bo

    2012-11-01

    Full Text Available Due to event-triggered sampling in a system, or the aim of reducing data storage, many tracking applications encounter irregular sampling times. By calculating the matrix exponential using an inverse Laplace transform, this paper transforms the irregular-sampling tracking problem into one of tracking with time-varying system parameters. Using the common Kalman filter, the developed method is used to track a target for a simulated trajectory and for video tracking. The results of simulation experiments show that it obtains good estimation performance even at a very high irregular rate of measurement sampling times.
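
    For a constant-velocity model the matrix exponential referred to above has the closed form F(Δt) = [[1, Δt], [0, 1]], so the time-varying parameter is simply the measured sampling gap. A minimal scalar-measurement sketch follows; the noise levels q and r are illustrative, not taken from the paper:

```python
def kalman_irregular(measurements, q=1e-3, r=0.01):
    """Constant-velocity Kalman filter; measurements is a list of (t, z)."""
    x = [measurements[0][1], 0.0]          # state: [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]
    t_prev = measurements[0][0]
    for t, z in measurements[1:]:
        dt = t - t_prev
        t_prev = t
        # Predict with F(dt) = [[1, dt], [0, 1]], the CV-model matrix exponential
        x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # Update with H = [1, 0] (position-only measurement)
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x
```

    Because dt is recomputed per measurement, the same filter code serves both regular and event-triggered sampling; only the prediction step changes.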

  5. Compression of Video Tracking and Bandwidth Balancing Routing in Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2015-12-01

    Full Text Available There has been tremendous growth in multimedia applications over wireless networks. Wireless Multimedia Sensor Networks (WMSNs) have become the premier choice in many research communities and in industry. Many state-of-the-art applications, such as surveillance, traffic monitoring, and remote health care, are essentially video tracking and transmission in WMSNs. The transmission speed is constrained by the large size of video data and by fixed bandwidth allocation along constant routing paths. In this paper, we present a CamShift-based algorithm to compress tracked video. We then propose a bandwidth balancing strategy in which each sensor node dynamically selects the next-hop node with the highest potential bandwidth capacity to resume communication. Key to this strategy is that each node merely maintains two parameters that capture its historical bandwidth trend and uses them to predict its near-future bandwidth capacity. The forwarding node then selects the next hop with the highest potential bandwidth capacity. Simulations demonstrate that our approach significantly increases the data received by the sink node and decreases the delay of video transmission in Wireless Multimedia Sensor Network environments.
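
    The two per-node parameters can be read as the level and trend of double exponential (Holt) smoothing, which is one simple way to realize "store a bandwidth trend, predict the near future, pick the best neighbor". The smoothing constants and node labels below are illustrative, not the paper's actual scheme:

```python
class NodeBandwidth:
    """Holt's double exponential smoothing: only (level, trend) are stored."""
    def __init__(self, alpha=0.5, beta=0.3):
        self.alpha, self.beta = alpha, beta
        self.level = None
        self.trend = 0.0

    def observe(self, bw):
        """Fold one bandwidth sample into the two stored parameters."""
        if self.level is None:
            self.level = bw
            return
        prev = self.level
        self.level = self.alpha * bw + (1 - self.alpha) * (self.level + self.trend)
        self.trend = self.beta * (self.level - prev) + (1 - self.beta) * self.trend

    def predict(self, steps=1):
        """Extrapolate near-future bandwidth from level + trend."""
        return self.level + steps * self.trend

def select_next_hop(neighbors):
    """neighbors: list of (node_id, NodeBandwidth); pick the best prediction."""
    return max(neighbors, key=lambda kv: kv[1].predict())[0]
```

    Keeping only two floats per neighbor is what makes the scheme cheap enough for sensor nodes, while the trend term lets a rising link overtake a flat but currently faster one.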

  6. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    Science.gov (United States)

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. When applied to endoscope tracking, the proposed similarity measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, and normalized sum of squared differences. Based on clinical data evaluation, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm, and processing was accelerated to more than 30 frames per second using a graphics processing unit.
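
    The luminance/contrast/structure decomposition such a measure builds on is that of standard SSIM; a single-window version over flattened 8-bit patches looks like this (the discriminative weighting of the paper's variant is not reproduced here, and the constants are the conventional K1 = 0.01, K2 = 0.03 choices for a 255-level range):

```python
def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM between two equal-length flattened 8-bit patches."""
    n = len(x)
    mx = sum(x) / n                      # luminance terms
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n   # contrast terms
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

    Identical patches score exactly 1, anti-correlated patches go negative, and small perturbations stay close to 1 — the degradation of structural information is what the score measures.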

  7. Occlusion Handling in Videos Object Tracking: A Survey

    International Nuclear Information System (INIS)

    Lee, B Y; Liew, L H; Cheah, W S; Wang, Y C

    2014-01-01

    Object tracking in video has been an active research area for decades. This interest is motivated by numerous applications, such as surveillance, human-computer interaction, and sports event monitoring. Many challenges related to tracking objects still remain; these can arise due to abrupt object motion, changing appearance patterns of objects and the scene, non-rigid object structures and, most significantly, occlusion of the tracked object, be it object-to-object or object-to-scene occlusion. Generally, occlusion in object tracking occurs under three situations: self-occlusion, inter-object occlusion, and occlusion by background scene structure. Self-occlusion occurs most frequently while tracking articulated objects, when one part of the object occludes another. Inter-object occlusion occurs when two objects being tracked occlude each other, whereas occlusion by the background occurs when a structure in the background occludes the tracked objects. Typically, tracking methods handle occlusion by modelling the object motion using linear and non-linear dynamic models. The derived models are used to continuously predict the object location when a tracked object is occluded until the object reappears. Examples of these methods are Kalman filtering and particle filtering trackers. Researchers have also utilised other features to resolve occlusion, for example, silhouette projections, colour histograms and optical flow. We will present some results from a previously conducted experiment on tracking a single object using Kalman filter, particle filter and mean shift trackers under various occlusion situations in this paper. We will also review various other occlusion handling methods that involve using multiple cameras. In a nutshell, the goal of this paper is to discuss in detail the problem of occlusion in object tracking and review the state-of-the-art occlusion handling methods, classify them into different categories, and identify new trends. Moreover, we discuss the important
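
    The "predict while occluded" idea reduces, for a constant-velocity model with unit frame time, to coasting the last estimated motion whenever the detector returns nothing. A minimal sketch, with the detection format assumed; a full tracker would use a Kalman or particle filter as the survey describes:

```python
def track_with_occlusion(detections):
    """detections: one (x, y) per frame, or None while the object is occluded."""
    estimates = []
    pos, vel = None, (0.0, 0.0)
    for det in detections:
        if pos is None:
            pos = det                                     # first observation starts the track
        elif det is None:
            pos = (pos[0] + vel[0], pos[1] + vel[1])      # coast on the motion model
        else:
            vel = (det[0] - pos[0], det[1] - pos[1])      # refresh velocity estimate
            pos = det
        estimates.append(pos)
    return estimates
```

    The track survives the gap because the dynamic model keeps supplying a plausible location; when the object reappears, the measurement simply snaps the estimate back.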

  8. Occlusion Handling in Videos Object Tracking: A Survey

    Science.gov (United States)

    Lee, B. Y.; Liew, L. H.; Cheah, W. S.; Wang, Y. C.

    2014-02-01

    Object tracking in video has been an active research area for decades. This interest is motivated by numerous applications, such as surveillance, human-computer interaction, and sports event monitoring. Many challenges related to tracking objects still remain; these can arise due to abrupt object motion, changing appearance patterns of objects and the scene, non-rigid object structures and, most significantly, occlusion of the tracked object, be it object-to-object or object-to-scene occlusion. Generally, occlusion in object tracking occurs under three situations: self-occlusion, inter-object occlusion, and occlusion by background scene structure. Self-occlusion occurs most frequently while tracking articulated objects, when one part of the object occludes another. Inter-object occlusion occurs when two objects being tracked occlude each other, whereas occlusion by the background occurs when a structure in the background occludes the tracked objects. Typically, tracking methods handle occlusion by modelling the object motion using linear and non-linear dynamic models. The derived models are used to continuously predict the object location when a tracked object is occluded until the object reappears. Examples of these methods are Kalman filtering and particle filtering trackers. Researchers have also utilised other features to resolve occlusion, for example, silhouette projections, colour histograms and optical flow. We will present some results from a previously conducted experiment on tracking a single object using Kalman filter, particle filter and mean shift trackers under various occlusion situations in this paper. We will also review various other occlusion handling methods that involve using multiple cameras. In a nutshell, the goal of this paper is to discuss in detail the problem of occlusion in object tracking and review the state-of-the-art occlusion handling methods, classify them into different categories, and identify new trends. Moreover, we discuss the important

  9. Using Genetic Algorithm for Eye Detection and Tracking in Video Sequence

    Directory of Open Access Journals (Sweden)

    Takuya Akashi

    2007-04-01

    Full Text Available We propose a high-speed, size- and orientation-invariant eye tracking method, which can acquire numerical parameters to represent the size and orientation of the eye. In this paper, we discuss the high tolerance to human head movement and the real-time processing that are needed for many applications, such as eye gaze tracking. The generality of the method is also important. We use template matching with a genetic algorithm in order to overcome these problems. A high-speed, accurate tracking scheme using Evolutionary Video Processing for eye detection and tracking is proposed. Usually, a genetic algorithm is unsuitable for real-time processing; however, we achieved real-time processing. The generality of the proposed method is provided by the artificial iris template used. In our simulations, eye tracking accuracy is 97.9%, with an average processing time of 28 milliseconds per frame.
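
    Template matching driven by a genetic algorithm can be sketched with chromosomes that encode a candidate (x, y) position and a fitness equal to the negative sum of absolute differences. The population size, crossover and mutation scheme below are illustrative; a real eye tracker would also encode scale and rotation, as the abstract describes:

```python
import random

def fitness(image, template, pos):
    """Negative sum of absolute differences: 0 is a perfect match."""
    x, y = pos
    return -sum(abs(image[y + r][x + c] - template[r][c])
                for r in range(len(template))
                for c in range(len(template[0])))

def ga_match(image, template, pop=30, gens=50, pmut=0.3, rng=random):
    """Evolve (x, y) chromosomes toward the best template position."""
    ymax = len(image) - len(template)
    xmax = len(image[0]) - len(template[0])
    population = [(rng.randint(0, xmax), rng.randint(0, ymax)) for _ in range(pop)]
    best = max(population, key=lambda p: fitness(image, template, p))
    for _ in range(gens):
        scored = sorted(population, key=lambda p: fitness(image, template, p),
                        reverse=True)
        parents = scored[:pop // 2]
        children = [best]                      # elitism: never lose the best
        while len(children) < pop:
            a, b = rng.choice(parents), rng.choice(parents)
            child = (a[0], b[1])               # crossover on the (x, y) genes
            if rng.random() < pmut:            # mutation: small random shift
                child = (min(xmax, max(0, child[0] + rng.randint(-2, 2))),
                         min(ymax, max(0, child[1] + rng.randint(-2, 2))))
            children.append(child)
        population = children
        cand = max(population, key=lambda p: fitness(image, template, p))
        if fitness(image, template, cand) > fitness(image, template, best):
            best = cand
    return best
```

    Evaluating only pop × gens candidates instead of every pixel position is what makes the GA approach fast enough for per-frame use, at the cost of a stochastic (though elitist, hence monotonically improving) search.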

  10. The Siegen automatic measuring system for nuclear track detectors: new developments

    International Nuclear Information System (INIS)

    Noll, A.; Rusch, G.; Roecher, H.; Dreute, J.; Heinrich, W.

    1988-01-01

    Starting ten years ago we developed completely automatic scanning and measuring systems for nuclear track detectors. In this paper we describe some new developments. Our autofocus systems based on the contrast of the video picture and on a laser autofocus have been improved in speed and in reliability. Based on new algorithms, faster programs have been developed to scan for nuclear tracks in plastic detectors. Methods for separation of overlapping tracks have been improved. Interactive programs for track measurements have been developed which are very helpful for space bio-physics experiments. Finally new methods for track measurements in nuclear emulsions irradiated with a beam perpendicular to the detector surface are described in this paper. (author)

  11. Tag-n-Track system for situation awareness for MOUTs

    Science.gov (United States)

    Kumar, Rakesh; Aggarwal, Manoj; Germano, Thomas E.; Zhao, Tao; Fontana, Robert; Bushika, Martin

    2006-05-01

    In order to train war fighters for urban warfare, live exercises are held at various Military Operations on Urban Terrain (MOUT) facilities. Commanders need to have situation awareness (SA) of the entire mock battlefield, and also the individual actions of the various war fighters. The commanders must be able to provide instant feedback and play through different actions and 'what-if' scenarios with the war fighters. The war fighters in their turn should be able to review their actions and rehearse different maneuvers. In this paper, we describe the technologies behind a prototype training system, which tracks war fighters around an urban site using a combination of ultra-wideband (UWB) Radio Frequency Identification (RFID) and smart video based tracking. The system is able to: (1) Tag each individual with an unique ID using an RFID system, (2) Track and locate individuals within the domain of interest, (3) Associate IDs with visual appearance derived from live videos, (4) Visualize movement and actions of individuals within the context of a 3D model, and (5) Store and review activities with (x,y,ID) information associated with each individual. Dynamic acquisition and recording of the precise location of individual troops and units during training greatly aids the analysis of the training sessions allowing improved review, critique and instruction.

  12. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System

    OpenAIRE

    Fiona A. Desland; Aqeela Afzal; Zuha Warraich; J Mocco

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garc...

  13. Manifolds for pose tracking from monocular video

    Science.gov (United States)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  14. The Siegen automatic measuring system for track detectors: new developments

    International Nuclear Information System (INIS)

    Rusch, G.; Winkel, E.; Noll, A.; Heinrich, W.

    1991-01-01

    Twelve years ago we began developing completely automatic scanning and measuring systems for nuclear track detectors. The hardware and software of these systems have been continuously improved, and they have been used in different heavy ion and cosmic ray experiments. In this paper we describe methods for high-resolution REL measurements in plastic nuclear track detectors and methods to scan and measure nuclear disintegration stars in AgCl detectors using an automatic measuring technique. The system uses a stepping-motor-driven microscope stage, a video camera and an image analysis computer based on a MC68020 microprocessor. (author)

  15. Multiple Drosophila Tracking System with Heading Direction

    Directory of Open Access Journals (Sweden)

    Pudith Sirigrivatanawong

    2017-01-01

    Full Text Available Machine vision systems have been widely used for image analysis, especially that which is beyond human ability. In biology, studies of behavior help scientists to understand the relationship between sensory stimuli and animal responses. This typically requires the analysis and quantification of animal locomotion. In our work, we focus on the analysis of the locomotion of the fruit fly Drosophila melanogaster, a widely used model organism in biological research. Our system consists of two components: fly detection and tracking. Our system provides the ability to extract a group of flies as the objects of concern and furthermore determines the heading direction of each fly. As each fly moves, the system states are refined with a Kalman filter to obtain the optimal estimation. For the tracking step, combining information such as position and heading direction with assignment algorithms gives a successful tracking result. The use of heading direction increases the system efficiency when dealing with identity loss and fly-swapping situations. The system can also operate with a variety of videos with different light intensities.
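
    The predict-then-assign step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a constant-velocity prediction in place of a full Kalman filter and a greedy nearest-neighbour match in place of the assignment algorithm, and all names and numbers are invented.

```python
import math

def predict(track):
    """Constant-velocity prediction of a track's next (x, y) position."""
    x, y, vx, vy = track
    return (x + vx, y + vy)

def assign(tracks, detections):
    """Greedily match each predicted track to its nearest free detection.

    Returns {track_index: detection_index}. A stand-in for the Kalman
    filter plus assignment algorithm described in the abstract."""
    free = set(range(len(detections)))
    matches = {}
    for i, track in enumerate(tracks):
        if not free:
            break
        px, py = predict(track)
        j = min(free, key=lambda k: math.hypot(detections[k][0] - px,
                                               detections[k][1] - py))
        matches[i] = j
        free.remove(j)
    return matches

tracks = [(0.0, 0.0, 1.0, 0.0), (5.0, 5.0, 0.0, 1.0)]  # x, y, vx, vy
detections = [(5.1, 6.0), (1.0, 0.1)]
print(assign(tracks, detections))  # {0: 1, 1: 0}
```

    Using the predicted position rather than the last observed one is what keeps identities stable when two flies pass close to each other.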

  16. Application results for an augmented video tracker

    Science.gov (United States)

    Pierce, Bill

    1991-08-01

    The Relay Mirror Experiment (RME) is a research program to determine the pointing accuracy and stability levels achieved when a laser beam is reflected by the RME satellite from one ground station to another. This paper reports the results of using a video tracker augmented with a quad cell signal to improve the RME ground station tracking system performance. The video tracker controls a mirror to acquire the RME satellite, and provides a robust low bandwidth tracking loop to remove line of sight (LOS) jitter. The high-passed, high-gain quad cell signal is added to the low bandwidth, low-gain video tracker signal to increase the effective tracking loop bandwidth, and significantly improves LOS disturbance rejection. The quad cell augmented video tracking system is analyzed, and the math model for the tracker is developed. A MATLAB model is then developed from this, and performance as a function of bandwidth and disturbances is given. Improvements in performance due to the addition of the video tracker and the augmentation with the quad cell are provided. Actual satellite test results are then presented and compared with the simulated results.
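
    The idea of adding a high-passed, high-gain quad cell signal to a low-bandwidth video tracker signal can be illustrated with a toy complementary filter. This is a sketch under assumed dynamics, not the RME ground station implementation; the pole `alpha` and the sample values are invented.

```python
class AugmentedTracker:
    """Toy complementary blend of a low-bandwidth video tracker signal
    with a high-passed quad cell signal (illustrative only)."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha     # high-pass pole: an arbitrary tuning value
        self.prev_quad = 0.0
        self.hp = 0.0

    def update(self, video_los, quad_los):
        # one-pole discrete high-pass of the quad cell measurement
        self.hp = self.alpha * (self.hp + quad_los - self.prev_quad)
        self.prev_quad = quad_los
        # low-frequency pointing from the video tracker plus
        # high-frequency jitter correction from the quad cell
        return video_los + self.hp

trk = AugmentedTracker()
# a step disturbance seen by the fast quad cell before the slow video loop
outputs = [trk.update(0.0, q) for q in (0.0, 1.0, 1.0, 1.0)]
print(outputs)  # immediate response that decays as the step becomes "old"
```

    The high-pass keeps the quad cell from biasing the steady-state pointing while still rejecting fast line-of-sight jitter, which mirrors the bandwidth-splitting argument in the abstract.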

  17. Multiple player tracking in sports video: a dual-mode two-way bayesian inference approach with progressive observation modeling.

    Science.gov (United States)

    Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong

    2011-06-01

    Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.

  18. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among various sensing channels, vision is the most important for making robots intelligent. If provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. The use of a special chip for correlation and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications for robot behavior control are also introduced. (author)
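
    The correlation between local images that this system computes in dedicated hardware can be illustrated in software with a sum-of-squared-differences template search over a small window. A minimal sketch with invented data, not the tracker's actual implementation:

```python
def ssd(patch, template):
    """Sum of squared differences between two equal-sized patches."""
    return sum((a - b) ** 2
               for ra, rb in zip(patch, template)
               for a, b in zip(ra, rb))

def track(frame, template, last_xy, radius=1):
    """Best match for the template in a small window around last_xy."""
    h, w = len(template), len(template[0])
    x0, y0 = last_xy
    best_score, best_xy = None, last_xy
    for y in range(max(0, y0 - radius), min(len(frame) - h, y0 + radius) + 1):
        for x in range(max(0, x0 - radius), min(len(frame[0]) - w, x0 + radius) + 1):
            patch = [row[x:x + w] for row in frame[y:y + h]]
            score = ssd(patch, template)
            if best_score is None or score < best_score:
                best_score, best_xy = score, (x, y)
    return best_xy

frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 9], [9, 9]]
print(track(frame, template, (1, 1)))  # (2, 1): the blob moved one pixel right
```

    Restricting the search to a small window around the last position is what makes per-cue tracking cheap enough to run on hundreds of cues per frame.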

  19. Precise Object Tracking under Deformation

    International Nuclear Information System (INIS)

    Saad, M.H.

    2010-01-01

    Precise object tracking is an essential issue in several serious applications such as robot vision, automated surveillance (civil and military), inspection, biomedical image analysis, video coding, motion segmentation, human-machine interfaces, visualization, medical imaging, traffic systems, satellite imaging, etc. This framework focuses on precise object tracking under deformations such as scaling, rotation, noise, blurring and changes of illumination. This research is an attempt to solve these serious problems in visual object tracking, by which the quality of the overall system will be improved. A three-dimensional (3D) geometrical model is developed to determine the current pose of an object and predict its future location based on an FIR model learned by OLS. This framework presents a robust ranging technique to track a visual target in place of traditional, expensive ranging sensors. The presented research work is applied to a real video stream and achieves high-precision results.

  20. Optimized swimmer tracking system based on a novel multi-related-targets approach

    Science.gov (United States)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2017-02-01

    Robust tracking is a crucial step in automatic swimmer evaluation from video sequences. We designed a robust swimmer tracking system using a new multi-related-targets approach. The main idea is to consider the swimmer as a block of connected subtargets that advance at the same speed. If one of the subtargets is partially or totally occluded, it can be localized by knowing the position of the others. In this paper, we first introduce the two-dimensional direct linear transformation technique that we used to calibrate the videos. Then, we present the classical tracking approach based on dynamic fusion. Next, we highlight the main contribution of our work, which is the multi-related-targets tracking approach. This approach, the classical head-only approach and the ground truth are then compared, through testing on a database of high-level swimmers in training, national and international competitions (French National Championships, Limoges 2015, and World Championships, Kazan 2015). Tracking percentage and the accuracy of the instantaneous speed are evaluated and the findings show that our new approach is significantly more accurate than the classical approach.
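
    The multi-related-targets idea, locating an occluded subtarget from the positions of the others, can be sketched as follows. This is an illustrative simplification assuming fixed nominal offsets between subtargets along the lane; the subtarget names and all coordinates are invented.

```python
def locate_occluded(visible, offsets, missing):
    """Estimate an occluded subtarget from the visible ones.

    visible: {name: (x, y)} for subtargets currently detected
    offsets: {name: (dx, dy)} nominal offset of each subtarget from a
             common reference point (invented values for illustration)
    Each visible subtarget implies a reference position; the occluded one
    is placed at the averaged reference plus its own nominal offset."""
    refs = [(x - offsets[n][0], y - offsets[n][1])
            for n, (x, y) in visible.items()]
    rx = sum(r[0] for r in refs) / len(refs)
    ry = sum(r[1] for r in refs) / len(refs)
    return (rx + offsets[missing][0], ry + offsets[missing][1])

offsets = {"head": (0.0, 0.0), "hips": (-0.6, 0.0), "feet": (-1.5, 0.0)}
visible = {"hips": (9.4, 2.0), "feet": (8.5, 2.0)}   # head occluded by splash
print(locate_occluded(visible, offsets, "head"))     # approximately (10.0, 2.0)
```

    Because all subtargets advance at the same speed, any visible one constrains the whole block, which is why the approach survives the splash occlusions that defeat head-only tracking.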

  1. A System to Generate SignWriting for Video Tracks Enhancing Accessibility of Deaf People

    Directory of Open Access Journals (Sweden)

    Elena Verdú

    2017-12-01

    Full Text Available The amount of video content on the Internet has grown considerably in recent years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to making multimedia content more accessible on the Web, by automatically translating subtitles in oral language to SignWriting, a way of writing Sign Language. This system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page to a web page compliant with level AA of the WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWriting. The functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods, including usability and accessibility tests, and the results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.

  2. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David

    2007-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  3. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David; Ebrahimi, Touradj

    2008-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  4. Video motion detection for physical security applications

    International Nuclear Information System (INIS)

    Matter, J.C.

    1990-01-01

    Physical security specialists have been attracted to the concept of video motion detection for several years. Claimed potential advantages included additional benefit from existing video surveillance systems, automatic detection, improved performance compared to human observers, and cost-effectiveness. In recent years, significant advances in image-processing dedicated hardware and image analysis algorithms and software have accelerated the successful application of video motion detection systems to a variety of physical security applications. Early video motion detectors (VMDs) were useful for interior applications of volumetric sensing. Success depended on having a relatively well-controlled environment. Attempts to use these systems outdoors frequently resulted in an unacceptable number of nuisance alarms. Currently, Sandia National Laboratories (SNL) is developing several advanced systems that employ image-processing techniques for a broader set of safeguards and security applications. The Target Cueing and Tracking System (TCATS), the Video Imaging System for Detection, Tracking, and Assessment (VISDTA), the Linear Infrared Scanning Array (LISA); the Mobile Intrusion Detection and Assessment System (MIDAS), and the Visual Artificially Intelligent Surveillance (VAIS) systems are described briefly

  5. OpenCV and TYZX : video surveillance for tracking.

    Energy Technology Data Exchange (ETDEWEB)

    He, Jim; Spencer, Andrew; Chu, Eric

    2008-08-01

    As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
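
    In-house tracking cameras of the kind described here typically start from background subtraction before any higher-level tracking or data fusion. A minimal running-average background model, a generic sketch rather than the NSEI system's code (all values invented):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running average: bg <- (1 - alpha) * bg + alpha * frame."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels that differ from the background model by more than thresh."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

bg = [[10.0] * 4 for _ in range(3)]       # learned background (flat grey)
frame = [[10.0] * 4 for _ in range(3)]
frame[1][2] = 200.0                       # a person enters the scene
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)         # background slowly absorbs change
print(mask)  # a single foreground pixel at row 1, column 2
```

    The small `alpha` is the adaptivity knob: large enough to absorb slow lighting changes, small enough that a person standing still is not learned into the background within a few frames.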

  6. OpenCV and TYZX : video surveillance for tracking

    International Nuclear Information System (INIS)

    He, Jim; Spencer, Andrew; Chu, Eric

    2008-01-01

    As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition

  7. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.

  8. Development of a CCD based system called DIGITRACK for automatic track counting and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Molnar, J.; Somogyi, G.; Szilagyi, S.; Sepsy, K. (Magyar Tudomanyos Akademia, Debrecen. Atommag Kutato Intezete)

    1984-01-01

    We have developed, to the best of our knowledge, the first automatic track analysis system (DIGITRACK) in which the video signals are processed by a new type of video receiver called a charge-coupled device (CCD). The photosensitive semiconductor device is a 2.5 cm long line imager of type Fairchild CCD 121HC which converts one row of the picture seen through a low-magnification microscope into 1728 binary signals by thresholding logic. The picture elements are analysed by a microcomputer equipped with two INTEL 8080 microprocessors and interfaced to a PDP-11/40 computer. The microcomputer also controls the motion of the microscope stage. For pattern recognition and analysis, a software procedure is developed which is able to differentiate between overlapping tracks and to determine the number, surface opening and x-y coordinates of the tracks occurring in a given detector area. The distribution of track densities and spot areas on the detector surface can be visualized on a graphic display. The DIGITRACK system has been tested for analysis of alpha-tracks registered in CR-39 and LR-115 detectors.

  9. Mouse short- and long-term locomotor activity analyzed by video tracking software.

    Science.gov (United States)

    York, Jason M; Blevins, Neil A; McNeil, Leslie K; Freund, Gregory G

    2013-06-20

    Locomotor activity (LMA) is a simple and easily performed measurement of behavior in mice and other rodents. Improvements in video tracking software (VTS) have allowed it to be coupled to LMA testing, dramatically improving specificity and sensitivity when compared to the line crossings method with manual scoring. In addition, VTS enables high-throughput experimentation. While similar to automated video tracking used for the open field test (OFT), LMA testing is unique in that it allows mice to remain in their home cage and does not utilize the anxiogenic stimulus of bright lighting during the active phase of the light-dark cycle. Traditionally, LMA has been used for short periods of time (minutes), while longer movement studies (hours to days) have often used implanted transmitters and biotelemetry. With the option of real-time tracking, long-term LMA testing, like short-term testing, can now be conducted using videography. Long-term LMA testing requires a specialized, but easily constructed, cage so that food and water (which are usually positioned on the cage top) do not obstruct videography. Importantly, videography and VTS allow for the quantification of parameters, such as the path of mouse movement, that are difficult or unfeasible to measure with line crossing and/or biotelemetry. In sum, LMA testing coupled to VTS affords a more complete description of mouse movement and the ability to examine locomotion over an extended period of time.
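
    Once the VTS has extracted a trajectory, path-of-movement parameters like those mentioned above are simple to derive. A hedged sketch of two such quantities, total path length and mean speed, with invented sample points:

```python
import math

def path_length(points):
    """Total distance travelled along a tracked (x, y) trajectory."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def mean_speed(points, dt):
    """Average speed given a fixed sampling interval dt."""
    return path_length(points) / (dt * (len(points) - 1))

traj = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]   # invented tracked positions
print(path_length(traj))         # 5.0 + 6.0 = 11.0
print(mean_speed(traj, dt=0.5))  # 11.0 units over 1.0 s
```

    This per-sample path integration is exactly what line-crossing counts approximate coarsely, which is why videography yields the finer movement description the abstract describes.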

  10. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Science.gov (United States)

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also trained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  11. Development of Automated Tracking System with Active Cameras for Figure Skating

    Science.gov (United States)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
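
    The camera-control step, panning and tilting so the extracted skater region stays near the image centre, can be sketched as a proportional controller. The gain and coordinates below are illustrative inventions, not the system's actual values:

```python
def pan_tilt_step(target_xy, frame_size, gain=0.1):
    """Proportional pan/tilt increments that drive the tracked skater
    region toward the image centre (gain is an invented tuning value)."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    ex, ey = target_xy[0] - cx, target_xy[1] - cy   # pixel error from centre
    return (gain * ex, gain * ey)

# skater detected right of centre in a 640x360 frame -> pan right, no tilt
pan, tilt = pan_tilt_step((420, 180), (640, 360))
print(pan, tilt)  # approximately 10.0 and 0.0
```

    A zoom command would follow the same pattern, with the error defined on the skater region's apparent size instead of its position.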

  12. Optimal path planning for video-guided smart munitions via multitarget tracking

    Science.gov (United States)

    Borkowski, Jeffrey M.; Vasquez, Juan R.

    2006-05-01

    An advent in the development of smart munitions entails autonomously modifying target selection during flight in order to maximize the value of the target being destroyed. A unique guidance law can be constructed that exploits both attribute and kinematic data obtained from an onboard video sensor. An optimal path planning algorithm has been developed with the goals of obstacle avoidance and maximizing the value of the target impacted by the munition. Target identification and classification provides a basis for target value which is used in conjunction with multi-target tracks to determine an optimal waypoint for the munition. A dynamically feasible trajectory is computed to provide constraints on the waypoint selection. Results demonstrate the ability of the autonomous system to avoid moving obstacles and revise target selection in flight.

  13. Analysis of Dead Time and Implementation of Smith Predictor Compensation in Tracking Servo Systems for Small Unmanned Aerial Vehicles

    National Research Council Canada - National Science Library

    Brashear , Jr, Thomas J

    2005-01-01

    .... Gimbaled video camera systems, designed at NPS, use two servo actuators to command line-of-sight orientation via a serial controller while tracking a target; this capability is termed Visual Based Target Tracking (VBTT...
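
    The Smith predictor compensation named in the title can be sketched for a discrete first-order plant with dead time. This is a textbook-style illustration under an assumed plant model with invented coefficients, not the NPS gimbal controller:

```python
from collections import deque

class SmithPredictor:
    """Toy discrete Smith predictor for an assumed first-order plant with
    dead time: y[k+1] = a*y[k] + b*u[k-d]. Illustrative only."""

    def __init__(self, a, b, delay, kp):
        self.a, self.b, self.kp = a, b, kp
        self.model = 0.0                   # delay-free model state
        self.buf = deque([0.0] * delay)    # delayed model outputs

    def control(self, setpoint, measurement):
        # feedback = delay-free model + (plant - delayed model) mismatch,
        # so the proportional gain never "sees" the dead time directly
        feedback = self.model + (measurement - self.buf[0])
        u = self.kp * (setpoint - feedback)
        self.buf.append(self.model)
        self.buf.popleft()
        self.model = self.a * self.model + self.b * u
        return u

# simulate the matching plant with a 5-sample dead time
a, b, d = 0.9, 0.1, 5
ctl = SmithPredictor(a, b, d, kp=2.0)
y, ubuf = 0.0, deque([0.0] * d)
for _ in range(200):
    u = ctl.control(1.0, y)
    ubuf.append(u)
    y = a * y + b * ubuf.popleft()
print(round(y, 3))  # settles near 2/3: proportional-only leaves an offset
```

    With a perfect model the mismatch term is zero and the loop behaves as if the dead time were absent, which is the property that lets the servo gain stay high despite transport delay.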

  14. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Directory of Open Access Journals (Sweden)

    Ming Xue

    2014-02-01

    Full Text Available To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also trained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks.

  15. An integrated video- and weight-monitoring system for the surveillance of highly enriched uranium blend down operations

    International Nuclear Information System (INIS)

    Lenarduzzi, R.; Castleberry, K.; Whitaker, M.; Martinez, R.

    1998-01-01

    An integrated video-surveillance and weight-monitoring system has been designed and constructed for tracking the blending down of weapons-grade uranium by the US Department of Energy. The instrumentation is being used by the International Atomic Energy Agency in its task of tracking and verifying the blended material at the Portsmouth Gaseous Diffusion Plant, Portsmouth, Ohio. The weight instrumentation developed at the Oak Ridge National Laboratory monitors and records the weight of cylinders of highly enriched uranium as their contents are fed into the blending facility, while the video equipment provided by Sandia National Laboratory records periodic and event-triggered images of the blending area. A secure data network between the scales, cameras, and computers ensures data integrity and eliminates the possibility of tampering. The details of the weight-monitoring instrumentation, the video- and weight-system interaction, and the secure data network are discussed

  16. Robust online face tracking-by-detection

    NARCIS (Netherlands)

    Comaschi, F.; Stuijk, S.; Basten, T.; Corporaal, H.

    2016-01-01

    The problem of online face tracking from unconstrained videos is still unresolved. Challenges range from coping with severe online appearance variations to coping with occlusion. We propose RFTD (Robust Face Tracking-by-Detection), a system which combines tracking and detection into a single

  17. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and analyze object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-IIPro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.

  18. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos

    KAUST Repository

    Wu, Baoyuan; Hu, Bao-Gang; Ji, Qiang

    2016-01-01

    Face clustering and face tracking are two areas of active research in automatic facial video processing. They, however, have long been studied separately, despite the inherent link between them. In this paper, we propose to perform simultaneous face

  19. Detection of Visual Events in Underwater Video Using a Neuromorphic Saliency-based Attention System

    Science.gov (United States)

    Edgington, D. R.; Walther, D.; Cline, D. E.; Sherlock, R.; Salamy, K. A.; Wilson, A.; Koch, C.

    2003-12-01

    The Monterey Bay Aquarium Research Institute (MBARI) uses high-resolution video equipment on remotely operated vehicles (ROV) to obtain quantitative data on the distribution and abundance of oceanic animals. High-quality video data supplants the traditional approach of assessing the kinds and numbers of animals in the oceanic water column through towing collection nets behind ships. Tow nets are limited in spatial resolution, and often destroy abundant gelatinous animals, resulting in species undersampling. Video camera-based quantitative video transects (QVT) are taken through the ocean midwater, from 50m to 4000m, and provide high-resolution data at the scale of the individual animals and their natural aggregation patterns. However, the current manual method of analyzing QVT video by trained scientists is labor intensive and poses a serious limitation to the amount of information that can be analyzed from ROV dives. Presented here is an automated system for detecting marine animals (events) visible in the videos. Automated detection is difficult due to the low contrast of many translucent animals and due to debris ("marine snow") cluttering the scene. Video frames are processed with an artificial intelligence attention selection algorithm that has proven a robust means of target detection in a variety of natural terrestrial scenes. The candidate locations identified by the attention selection module are tracked across video frames using linear Kalman filters. Typically, the occurrence of visible animals in the video footage is sparse in space and time. A notion of "boring" video frames, i.e., frames that do not contain any "interesting" events, is developed by detecting whether or not an interesting candidate object for an animal is present in a particular sequence of underwater video. If objects can be tracked successfully over several frames, they are stored as potentially "interesting" events. Based on low-level properties, interesting events are

  20. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    Energy Technology Data Exchange (ETDEWEB)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano [Cavendish Laboratory, University of Cambridge, 19 J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom)

    2014-02-15

    In this article, we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies, and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems than global measurements. Finite-size effects of the tracking region on measured mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis, in the absence of significant flow, i.e., for purely Brownian motion. Finally, the presented algorithm is also suitable for tracking fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
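
    Per spatial bin, the local MSD-vs-time analysis reduces to the computation below: mean squared displacements over increasing lag times, and a diffusion coefficient from the 1-D relation MSD(tau) = 2*D*tau. The spatial binning that makes the estimate "local" is omitted here, and the function names are illustrative.

```python
import numpy as np

# Mean squared displacement of a 1-D trajectory as a function of lag,
# and a diffusion coefficient from a least-squares fit through the
# origin, MSD(tau) = 2*D*tau. Spatial binning is omitted for brevity.
def msd(positions, max_lag):
    positions = np.asarray(positions, float)
    lags = np.arange(1, max_lag + 1)
    curve = np.array([np.mean((positions[lag:] - positions[:-lag]) ** 2)
                      for lag in lags])
    return lags, curve

def diffusion_coefficient(positions, dt, max_lag):
    lags, curve = msd(positions, max_lag)
    tau = lags * dt
    slope = np.sum(tau * curve) / np.sum(tau ** 2)   # LS slope through origin
    return slope / 2.0                               # MSD = 2*D*tau in 1-D
```

    For a simulated random walk with unit-variance steps, the estimate recovers a diffusion coefficient close to the theoretical 0.5; applying it to short windows of a measured track, restricted to positions inside one channel, gives the kind of local diffusivity the article describes.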

  1. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Science.gov (United States)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
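
    The template-matching face tracking mentioned above can be illustrated with a brute-force search. Real systems restrict the search to a window around the last known position and often use normalized cross-correlation, but the sum-of-squared-differences version below shows the idea.

```python
import numpy as np

# Brute-force template matching: slide the template over the frame and
# return the top-left corner with the smallest sum of squared
# differences (SSD). Illustrative; not the paper's exact matcher.
def match_template(frame, template):
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = np.sum((frame[y:y + th, x:x + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

    The returned position, tracked frame to frame, gives the translation of the face; the paper additionally estimates rotation using a textured 3D personal face model.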

  2. Toward automating Hammersmith pulled-to-sit examination of infants using feature point based video object tracking.

    Science.gov (United States)

    Dogra, Debi P; Majumdar, Arun K; Sural, Shamik; Mukherjee, Jayanta; Mukherjee, Suchandra; Singh, Arun

    2012-01-01

    Hammersmith Infant Neurological Examination (HINE) is a set of tests used for grading the neurological development of infants on a scale of 0 to 3. These tests help in assessing the neurophysiological development of babies, especially preterm infants born before a gestational age of 36 weeks. Such tests are often conducted in the follow-up clinics of hospitals for grading infants with suspected disabilities. Assessment based on HINE depends on the expertise of the physicians conducting the examinations. It has been noted that some of these tests, especially pulled-to-sit and lateral tilting, are difficult to assess solely from visual observation. For example, during the pulled-to-sit examination, the examiner needs to observe the movement of the head relative to the torso while the infant is pulled up by the wrists. The examiner may find it difficult to follow the head movement from the coronal view. Automatic or semi-automatic analysis based on video object tracking can be helpful in this case. In this paper, we present a video-based method to automate the analysis of the pulled-to-sit examination. In this context, an efficient video object tracking algorithm based on dynamic programming and node pruning is proposed. Pulled-to-sit event detection is handled by the proposed tracking algorithm, which uses a 2-D geometric model of the scene. The algorithm has been tested with both plain and marker-based videos of the examination recorded at the neuro-development clinic of the SSKM Hospital, Kolkata, India. It is found that the proposed algorithm is capable of estimating the pulled-to-sit score with a sensitivity of 80%-92% and a specificity of 89%-96%.

  3. Automated intelligent video surveillance system for ships

    Science.gov (United States)

    Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob

    2009-05-01

    To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems that can detect, identify, and track small watercraft with potentially malicious intent, alert the crew, and rule out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but is also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat detection. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy of detecting and identifying asymmetric attacks for ship protection.

  4. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. In recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work in which we showed the benefits for the human observer of a user interface that utilizes the visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.

  5. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning.

    Science.gov (United States)

    Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P; Zelikowsky, Moriel; Navonne, Santiago G; Perona, Pietro; Anderson, David J

    2015-09-22

    A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body "pose" of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics.

  6. Precise object tracking under deformation

    International Nuclear Information System (INIS)

    Saad, M.H

    2010-01-01

    Precise object tracking is an essential issue in several serious applications such as robot vision, automated surveillance (civil and military), inspection, biomedical image analysis, video coding, motion segmentation, human-machine interfaces, visualization, medical imaging, traffic systems, satellite imaging, etc. This framework focuses on precise object tracking under deformations such as scaling, rotation, noise, blurring, and changes of illumination. This research attempts to solve these problems in visual object tracking, thereby improving the quality of the overall system. A three-dimensional (3D) geometrical model is developed to determine the current pose of an object and predict its future location, based on an FIR model learned by ordinary least squares (OLS). The framework also presents a robust ranging technique to track a visual target in place of traditional, expensive ranging sensors. The presented work is applied to real video streams and achieves high-precision results.
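
    The prediction step named above (an FIR model learned by OLS) can be sketched for one motion coordinate as follows; the model order and data are illustrative, since the abstract gives no details.

```python
import numpy as np

# Fit an order-p FIR predictor x_t = a_1*x_{t-1} + ... + a_p*x_{t-p}
# by ordinary least squares, then predict the next location of one
# motion coordinate. Order and training data are illustrative.
def fit_fir(track, p):
    track = np.asarray(track, float)
    # Each row holds [x_{t-1}, ..., x_{t-p}]; the target is x_t.
    X = np.array([track[t - 1::-1][:p] for t in range(p, len(track))])
    y = track[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(track, coeffs):
    p = len(coeffs)
    recent = np.asarray(track, float)[-1:-p - 1:-1]  # last p samples, newest first
    return float(recent @ coeffs)
```

    For a target moving at constant velocity, an order-2 FIR model recovers the exact predictor x_t = 2*x_{t-1} - x_{t-2}.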

  7. Persistent Aerial Tracking

    KAUST Repository

    Mueller, Matthias

    2016-04-13

    In this thesis, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. We also present a simulator that can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV "in the field", as well as to generate synthetic but photo-realistic tracking datasets with free ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator will be made publicly available to the vision community to further research in the area of object tracking from UAVs. Additionally, we propose a persistent, robust, and autonomous object tracking system for unmanned aerial vehicles (UAVs) called Persistent Aerial Tracking (PAT). A computer vision and control strategy is applied to a diverse set of moving objects (e.g. humans, animals, cars, boats, etc.), integrating multiple UAVs with a stabilized RGB camera. A novel strategy is employed to successfully track objects over a long period, by 'handing over the camera' from one UAV to another. We integrate the complete system into an off-the-shelf UAV, and obtain promising results showing the robustness of our solution in real-world aerial scenarios.

  8. Development of SPIES (Space Intelligent Eyeing System) for smart vehicle tracing and tracking

    Science.gov (United States)

    Abdullah, Suzanah; Ariffin Osoman, Muhammad; Guan Liyong, Chua; Zulfadhli Mohd Noor, Mohd; Mohamed, Ikhwan

    2016-06-01

    SPIES or Space-based Intelligent Eyeing System is an intelligent technology that can be utilized for various applications, such as gathering spatial information about features on Earth, tracking the movement of an object, tracing historical information, monitoring driving behavior, and serving as a real-time security and alarm system, among many others. SPIES, which will be developed and supplied modularly, encourages usage based on the needs and affordability of users. SPIES is a complete system with camera, GSM, GPS/GNSS, and G-sensor modules with intelligent functions and capabilities. Mainly, the camera is used to capture pictures and video, sometimes with audio, of an event. Its usage is not limited to nostalgic purposes; it can also serve as a reference for security and as material evidence when an undesirable event such as a crime occurs. When integrated with the space-based technology of the Global Navigation Satellite System (GNSS), photos and videos can be recorded together with positioning information. Integrating these technologies with Information and Communication Technology (ICT) and a Geographic Information System (GIS) produces an innovative method of gathering still pictures or video with positioning information that can be conveyed in real time via the web to display locations on a map, hence creating an intelligent eyeing system based on space technology. Providing global positioning information is a challenge, but SPIES overcomes it even in areas without GNSS signal reception, for the purpose of continuous tracking and tracing capability.

  9. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  10. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio-based speaker tracking, which is used for adaptive beamforming and automatic camera...

  11. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

    Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that, after applying inVideo to current video material, student-student and student-faculty interactions increased significantly across the program's 24 sections.

  12. Solar tracking system

    Science.gov (United States)

    Okandan, Murat; Nielson, Gregory N.

    2016-07-12

    Solar tracking systems, as well as methods of using such solar tracking systems, are disclosed. More particularly, embodiments of the solar tracking systems include lateral supports horizontally positioned between uprights to support photovoltaic modules. The lateral supports may be raised and lowered along the uprights or translated to cause the photovoltaic modules to track the moving sun.

  13. Digital Watermark Tracking using Intelligent Multi-Agents System

    Directory of Open Access Journals (Sweden)

    Nagaraj V. DHARWADKAR

    2010-01-01

    Full Text Available E-commerce has become a huge business and a driving factor in the development of the Internet. Online shopping services are well established. Due to the evolution of 2G and 3G mobile networks, online shopping services will soon be complemented by their wireless counterparts. Furthermore, in recent years the online delivery of digital media, such as MP3 audio, video, or images, has become very popular and will become an increasingly important part of e-commerce. The advantage of the Internet is the sharing of valuable digital data, which also leads to misuse of that data. To resolve the problem of misuse of digital data on the Internet, we need a strong digital rights monitoring system. Digital Rights Management (DRM) is a fairly young discipline, although some of its underlying technologies have been known for many years. The use of DRM for managing and protecting intellectual property rights is a comparatively new field. In this paper we propose a model for online digital image library copyright protection based on a watermark tracking system. In our proposed model, the tracking of watermarks on remote host nodes is done using active mobile agents. A multi-agent system architecture is used in watermark tracking, which supports the coordination of several component tasks across distributed and flexible networks of information sources. Whereas a centralized system is susceptible to system-wide failures and processing bottlenecks, multi-agent systems are more reliable, especially given the likelihood of individual component failures.

  14. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, we present detailed studies performed while developing a real-time system that uses monocular video cameras to estimate vehicle speeds for traffic-flow surveillance and safe travel. We assume that the studied road segment is planar and straight, the camera is tilted downward from a bridge, and the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose, a sufficient number of points on the vehicle are selected, and these points must be accurately tracked across at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space are then transformed to object space to find their absolute values. The accuracy of the estimated speed is approximately ±1 - 2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
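
    After rectification, the final conversion from tracked pixel displacements to a speed reduces to the arithmetic below; the metres-per-pixel scale comes from the known line segment mentioned in the abstract. The function name and numbers are illustrative.

```python
# Speed of a tracked point between two rectified video frames:
# pixel displacement -> metres via a known scale, divided by the
# elapsed time, reported in km/h. Values are illustrative.
def speed_kmh(p0, p1, frames_elapsed, fps, metres_per_pixel):
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    dist_m = ((dx * dx + dy * dy) ** 0.5) * metres_per_pixel
    dt = frames_elapsed / fps            # elapsed time in seconds
    return dist_m / dt * 3.6             # m/s -> km/h
```

    In practice the estimate would be averaged over the several reference points tracked on the vehicle.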

  15. Vision-Based System for Human Detection and Tracking in Indoor Environment

    OpenAIRE

    Benezeth , Yannick; Emile , Bruno; Laurent , Hélène; Rosenberger , Christophe

    2010-01-01

    International audience; In this paper, we propose a vision-based system for human detection and tracking in indoor environments using a static camera. The proposed method is based on object recognition in still images combined with methods using temporal information from the video. In doing so, we improve the performance of the overall system and reduce task complexity. We first use background subtraction to limit the search space of the classifier. The segmentation is realized by modeling ...

  16. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    Science.gov (United States)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan-tilt angles at capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
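
    The kind of objective function described above can be illustrated as a weighted score over capture conditions, with the best-scoring camera chosen for the next capture. The terms, weights, and greedy argmax below are assumptions for illustration, not the authors' actual formula.

```python
# Illustrative capture-quality score combining camera-subject distance,
# pan/tilt effort, and face visibility; terms and weights are assumed.
def capture_score(dist_m, pan_deg, tilt_deg, face_visible,
                  w_dist=1.0, w_angle=0.5, w_face=2.0, ideal_dist=8.0):
    score = -w_dist * abs(dist_m - ideal_dist)         # prefer the ideal range
    score -= w_angle * (abs(pan_deg) + abs(tilt_deg))  # prefer little PTZ motion
    score += w_face if face_visible else 0.0           # reward visible faces
    return score

def assign_camera(candidates):
    # candidates: list of (camera_id, dist_m, pan_deg, tilt_deg, face_visible)
    return max(candidates, key=lambda c: capture_score(*c[1:]))[0]
```

    A real scheduler would additionally penalize hand-off time and track how many captures each subject has already received.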

  17. Video Surveillance of Epilepsy Patients using Color Image Processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Alving, Jørgen

    2007-01-01

    This report introduces a method for tracking patients under video surveillance based on a marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes, e.g. occlusions, very fast movements, lighting issues and other movi

  18. Video surveillance of epilepsy patients using color image processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Vilic, Adnan

    2014-01-01

    This paper introduces a method for tracking patients under video surveillance based on a color marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other mov...

  19. The live service of video geo-information

    Science.gov (United States)

    Xue, Wu; Zhang, Yongsheng; Yu, Ying; Zhao, Ling

    2016-03-01

    In disaster rescue, emergency response, and other scenarios, traditional aerial photogrammetry struggles to meet the demands of real-time monitoring and dynamic tracking. To achieve a live service of video geo-information, a system is designed and realized: an unmanned helicopter equipped with a video sensor, POS, and a high-band radio. This paper briefly introduces the concept and design of the system. The workflow of the video geo-information live service is listed. Related experiments and some products are shown. In the end, conclusions and an outlook are given.

  20. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    less vertical motion. The exceptions are videos from the classes of biking (mainly due to the camera tracking fast bikers), jumping on a trampoline ... tracking the bikers; the jumping videos, featuring people on trampolines, the swing videos, which are usually recorded in profile view, and the walking

  1. Unattended video surveillance systems for international safeguards

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1979-01-01

    The use of unattended video surveillance systems places some unique requirements on the systems and their hardware. The systems have the traditional requirements of video imaging, video storage, and video playback but also have some special requirements such as tamper safing. The technology available to meet these requirements and how it is being applied to unattended video surveillance systems are discussed in this paper

  2. Droplet morphometry and velocimetry (DMV): a video processing software for time-resolved, label-free tracking of droplet parameters.

    Science.gov (United States)

    Basu, Amar S

    2013-05-21

    Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), a digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time-history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbour spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. Performance metrics
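
    The frame correlation step (linking droplets across frames via a nearest-neighbour search with matching criteria) can be sketched as below. Greedy one-to-one matching on centroid distance is an implementation assumption; DMV's user-defined criteria are richer.

```python
import numpy as np

# Link droplets detected in the current frame to the nearest droplet in
# the previous frame, subject to a maximum-distance criterion. Greedy
# one-to-one matching; unmatched droplets would start new tracks.
def link_frames(prev_centroids, curr_centroids, max_dist):
    prev = np.asarray(prev_centroids, float)
    links, used = {}, set()
    for j, c in enumerate(np.asarray(curr_centroids, float)):
        d = np.linalg.norm(prev - c, axis=1)   # distance to every previous droplet
        for i in np.argsort(d):
            if d[i] <= max_dist and int(i) not in used:
                links[j] = int(i)              # droplet j continues track i
                used.add(int(i))
                break
    return links
```

    Accumulating these per-frame links over a video yields the droplet trajectories from which velocity and the other time-history parameters are derived.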

  3. A new laboratory radio frequency identification (RFID) system for behavioural tracking of marine organisms.

    Science.gov (United States)

    Aguzzi, Jacopo; Sbragaglia, Valerio; Sarriá, David; García, José Antonio; Costa, Corrado; del Río, Joaquín; Mànuel, Antoni; Menesatti, Paolo; Sardà, Francesc

    2011-01-01

    Radio frequency identification (RFID) devices are currently used to quantify several traits of animal behaviour with potential applications for the study of marine organisms. To date, behavioural studies with marine organisms are rare because of the technical difficulty of propagating radio waves within the saltwater medium. We present a novel RFID tracking system to study the burrowing behaviour of a valuable fishery resource, the Norway lobster (Nephrops norvegicus L.). The system consists of a network of six controllers, each handling a group of seven antennas. That network was placed below a microcosm tank that recreated important features typical of Nephrops' grounds, such as the presence of multiple burrows. The animals carried a passive transponder attached to their telson, operating at 13.56 MHz. The tracking system was implemented to concurrently report the behaviour of up to three individuals, in terms of their travelled distances in a specified unit of time and their preferential positioning within the antenna network. To do so, the controllers worked in parallel to send the antenna data to a computer via a USB connection. The tracking accuracy of the system was evaluated by concurrently recording the animals' behaviour with automated video imaging. During the two experiments, each lasting approximately one week, two different groups of three animals each showed a variable burrow occupancy and a nocturnal displacement under a standard photoperiod regime (12 h light:12 h dark), measured using the RFID method. Similar results were obtained with the video imaging. Our implemented RFID system was therefore capable of efficiently tracking the tested organisms and has a good potential for use on a wide variety of other marine organisms of commercial, aquaculture, and ecological interest.

  4. A System to Generate SignWriting for Video Tracks Enhancing Accessibility of Deaf People

    OpenAIRE

    Elena Verdú; Cristina Pelayo G-Bustelo; Ángeles Martínez Sánchez; Rubén Gonzalez-Crespo

    2017-01-01

    Video content has increased greatly on the Internet during the last years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to making multimedia content more accessible on the Web, by automatically translating subtitles in oral language to SignWriting, a way of writing Sign Language. This system extends the functionality of a general we...

  5. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  6. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  7. Person detection, tracking and following using stereo camera

    Science.gov (United States)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and thus predicts bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the scale ambiguity of the video tracker is also elegantly resolved. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
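The scale-ambiguity point can be made concrete with the pinhole stereo relation Z = fB/d. The sketch below converts a disparity to metric depth and a tracker bounding box to an approximate person height; the focal length, baseline, and box height are made-up values, not from the paper.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Triangulate metric depth from a rectified stereo pair:
    Z = f * B / d, with f in pixels, baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def person_height_m(bbox_height_px, depth_m, focal_px):
    """Convert a tracker bounding-box height to metres with the pinhole model,
    removing the scale ambiguity of a purely image-based tracker."""
    return bbox_height_px * depth_m / focal_px

z = stereo_depth(focal_px=700.0, baseline_m=0.12, disparity_px=21.0)
print(z)                                  # 4.0 m
print(person_height_m(300.0, z, 700.0))   # ~1.71 m
```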

  8. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique.

    Directory of Open Access Journals (Sweden)

    Michael B McCamy

    Full Text Available Human eyes move continuously, even during visual fixation. These "fixational eye movements" (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.
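Microsaccade detection in such recordings is commonly done with a velocity threshold in the spirit of the Engbert–Kliegl algorithm: flag samples whose eye speed exceeds a multiple of a median-based noise estimate. The sketch below is a simplified one-dimensional illustration on a synthetic trace, not the study's actual pipeline.

```python
import numpy as np

def detect_microsaccades(x, fs, lam=6.0):
    """Velocity-threshold detection: samples whose speed exceeds lam times a
    median-based noise estimate are flagged and grouped into events."""
    v = np.gradient(x) * fs                                   # deg/s
    sigma = np.sqrt(np.median(v ** 2) - np.median(v) ** 2 + 1e-12)
    mask = np.abs(v) > lam * sigma
    events, start = [], None
    for i, m in enumerate(mask):                              # group contiguous samples
        if m and start is None:
            start = i
        elif not m and start is not None:
            events.append((start, i - 1))
            start = None
    if start is not None:
        events.append((start, len(mask) - 1))
    return events

fs = 500.0                                   # 500 Hz eye tracker
t = np.arange(0, 1.0, 1.0 / fs)
x = 0.01 * np.sin(2 * np.pi * 2 * t)         # slow drift, in degrees
x[250:255] += np.linspace(0.0, 0.3, 5)       # a 0.3-deg microsaccade...
x[255:] += 0.3                               # ...to a new fixation position
print(detect_microsaccades(x, fs))           # one event around sample 250
```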

  9. Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach.

    Science.gov (United States)

    Reader, Arran T; Holmes, Nicholas P

    2015-01-01

    Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video vs. real life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual freely moved within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced in video compared to face-to-face feedback, and in complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, relevant to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regards to previous findings, and with suggestions for future experimentation.
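A cross-correlation analysis of actor and imitator motion traces can be sketched as below: the lag of the cross-correlation peak estimates how far behind the imitator is. The signals and sampling rate are synthetic illustrations, not the study's data.

```python
import numpy as np

def imitation_lag(actor, imitator, fs):
    """Lag (s) at which the imitator's trace best matches the actor's,
    via the peak of the full cross-correlation of mean-centred signals."""
    a = actor - actor.mean()
    b = imitator - imitator.mean()
    xc = np.correlate(b, a, mode="full")
    lag_samples = np.argmax(xc) - (len(a) - 1)
    return lag_samples / fs

fs = 60.0                                      # 60 Hz motion capture
t = np.arange(0, 5, 1 / fs)
actor = np.exp(-((t - 2.0) ** 2) / 0.1)        # one reach of the actor's hand
imitator = np.exp(-((t - 2.2) ** 2) / 0.1)     # imitated ~200 ms later
print(imitation_lag(actor, imitator, fs))      # 0.2
```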

  10. People detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.b, E-mail: eduardo@lps.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Eletrica; Cota, Raphael E.; Ramos, Bruno L., E-mail: brunolange@poli.ufrj.b [Universidade Federal do Rio de Janeiro (EP/UFRJ), RJ (Brazil). Dept. de Engenharia Eletronica e de Computacao

    2011-07-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel, during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detection, segmentation and tracking people in video. This first paper reports people segmentation in video using background subtraction, by two different approaches, namely frame differences, and blind signal separation based on the independent component analysis method. Results are commented, along with perspectives for further work. (author)
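The frame-difference approach to background subtraction mentioned above can be sketched in a few lines: pixels whose grey level changes by more than a threshold between consecutive frames are marked as moving. The frame size, values, and threshold below are illustrative only.

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh=25):
    """Segment moving pixels as those whose absolute grey-level change
    between consecutive frames exceeds a threshold."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Synthetic 8x8 frames: static background (value 50) and a 2x2 "person" appearing.
prev = np.full((8, 8), 50, dtype=np.uint8)
curr = prev.copy()
curr[3:5, 3:5] = 200

mask = frame_difference_mask(prev, curr)
print(int(mask.sum()))  # 4 changed pixels
```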

  11. People detection in nuclear plants by video processing for safety purpose

    International Nuclear Information System (INIS)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Seixas, Jose M.; Silva, Eduardo Antonio B.; Cota, Raphael E.; Ramos, Bruno L.

    2011-01-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel, during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detection, segmentation and tracking people in video. This first paper reports people segmentation in video using background subtraction, by two different approaches, namely frame differences, and blind signal separation based on the independent component analysis method. Results are commented, along with perspectives for further work. (author)

  12. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis and systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  13. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...
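A common way to turn a tracked facial signal into an HR estimate (not necessarily the authors' exact method) is to take the dominant spectral peak within a physiologically plausible band. A minimal sketch on a synthetic trace:

```python
import numpy as np

def heart_rate_bpm(signal, fs, band=(0.7, 4.0)):
    """Estimate heart rate as the dominant spectral peak of a tracked
    facial signal, restricted to a plausible HR band (42-240 bpm)."""
    sig = signal - np.mean(signal)
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(power[in_band])]

fs = 30.0                                    # 30 fps video
t = np.arange(0, 20, 1.0 / fs)               # 20 s clip
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t)   # 1.2 Hz pulse component
trace += 0.5 * np.sin(2 * np.pi * 0.2 * t)   # slow head movement (out of band)
print(heart_rate_bpm(trace, fs))             # ~72 bpm
```

Restricting the search band is what makes the estimate robust to the large but slow head-movement component.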

  14. A microprocessor based picture analysis system for automatic track measurements

    International Nuclear Information System (INIS)

    Heinrich, W.; Trakowski, W.; Beer, J.; Schucht, R.

    1982-01-01

    In the last few years picture analysis became a powerful technique for measurements of nuclear tracks in plastic detectors. For this purpose rather expensive commercial systems are available. Two inexpensive microprocessor based systems with different resolution were developed. The video pictures of particles seen through a microscope are digitized in real time and the picture analysis is done by software. The microscopes are equipped with stages driven by stepping motors, which are controlled by separate microprocessors. A PDP 11/03 supervises the operation of all microprocessors and stores the measured data on its mass storage devices. (author)

  15. A New Laboratory Radio Frequency Identification (RFID) System for Behavioural Tracking of Marine Organisms

    Science.gov (United States)

    Aguzzi, Jacopo; Sbragaglia, Valerio; Sarriá, David; García, José Antonio; Costa, Corrado; del Río, Joaquín; Mànuel, Antoni; Menesatti, Paolo; Sardà, Francesc

    2011-01-01

    Radio frequency identification (RFID) devices are currently used to quantify several traits of animal behaviour with potential applications for the study of marine organisms. To date, behavioural studies with marine organisms are rare because of the technical difficulty of propagating radio waves within the saltwater medium. We present a novel RFID tracking system to study the burrowing behaviour of a valuable fishery resource, the Norway lobster (Nephrops norvegicus L.). The system consists of a network of six controllers, each handling a group of seven antennas. That network was placed below a microcosm tank that recreated important features typical of Nephrops’ grounds, such as the presence of multiple burrows. The animals carried a passive transponder attached to their telson, operating at 13.56 MHz. The tracking system was implemented to concurrently report the behaviour of up to three individuals, in terms of their travelled distances in a specified unit of time and their preferential positioning within the antenna network. To do so, the controllers worked in parallel to send the antenna data to a computer via a USB connection. The tracking accuracy of the system was evaluated by concurrently recording the animals’ behaviour with automated video imaging. During the two experiments, each lasting approximately one week, two different groups of three animals each showed a variable burrow occupancy and a nocturnal displacement under a standard photoperiod regime (12 h light:12 h dark), measured using the RFID method. Similar results were obtained with the video imaging. Our implemented RFID system was therefore capable of efficiently tracking the tested organisms and has a good potential for use on a wide variety of other marine organisms of commercial, aquaculture, and ecological interest. PMID:22163710

  16. A New Laboratory Radio Frequency Identification (RFID) System for Behavioural Tracking of Marine Organisms

    Directory of Open Access Journals (Sweden)

    Francesc Sardà

    2011-10-01

    Full Text Available Radio frequency identification (RFID) devices are currently used to quantify several traits of animal behaviour with potential applications for the study of marine organisms. To date, behavioural studies with marine organisms are rare because of the technical difficulty of propagating radio waves within the saltwater medium. We present a novel RFID tracking system to study the burrowing behaviour of a valuable fishery resource, the Norway lobster (Nephrops norvegicus L.). The system consists of a network of six controllers, each handling a group of seven antennas. That network was placed below a microcosm tank that recreated important features typical of Nephrops’ grounds, such as the presence of multiple burrows. The animals carried a passive transponder attached to their telson, operating at 13.56 MHz. The tracking system was implemented to concurrently report the behaviour of up to three individuals, in terms of their travelled distances in a specified unit of time and their preferential positioning within the antenna network. To do so, the controllers worked in parallel to send the antenna data to a computer via a USB connection. The tracking accuracy of the system was evaluated by concurrently recording the animals’ behaviour with automated video imaging. During the two experiments, each lasting approximately one week, two different groups of three animals each showed a variable burrow occupancy and a nocturnal displacement under a standard photoperiod regime (12 h light:12 h dark), measured using the RFID method. Similar results were obtained with the video imaging. Our implemented RFID system was therefore capable of efficiently tracking the tested organisms and has a good potential for use on a wide variety of other marine organisms of commercial, aquaculture, and ecological interest.

  17. Improved people detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Carvalho, Paulo Victor R., E-mail: calexandre@ien.gov.br, E-mail: mol@ien.gov.br, E-mail: paulov@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.br, E-mail: eduardo@smt.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Eletrica; Waintraub, Fabio, E-mail: fabiowaintraub@hotmail.com [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola Politecnica. Departamento de Engenharia Eletronica e de Computacao

    2013-07-01

    This work describes improvements in a surveillance system for safety purposes in nuclear plants. The objective is to track people online in video, in order to estimate the dose received by personnel, during working tasks executed in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a nuclear research reactor, Argonauta. Cameras have been installed within Argonauta room, supplying the data needed. Video processing methods were combined for detecting and tracking people in video. More specifically, segmentation, performed by background subtraction, was combined with a tracking method based on color distribution. The use of both methods improved the overall results. An alternative approach was also evaluated, by means of blind source signal separation. Results are commented, along with perspectives. (author)
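Color-distribution tracking typically scores candidate patches against a target color histogram. A minimal sketch using the Bhattacharyya coefficient, a common similarity measure for this purpose (the paper does not specify its exact measure; bin count and patch values below are illustrative):

```python
import numpy as np

def colour_histogram(patch, bins=8):
    """Normalised intensity histogram of an image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def bhattacharyya(p, q):
    """Similarity in [0, 1] between two normalised histograms;
    1 means identical colour distributions."""
    return float(np.sum(np.sqrt(p * q)))

person = np.full((10, 10), 200, dtype=np.uint8)   # bright candidate patch
same   = np.full((10, 10), 205, dtype=np.uint8)   # falls in the same bin
other  = np.full((10, 10), 40, dtype=np.uint8)    # background patch

target = colour_histogram(person)
print(bhattacharyya(target, colour_histogram(same)))   # 1.0
print(bhattacharyya(target, colour_histogram(other)))  # 0.0
```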

  18. Improved people detection in nuclear plants by video processing for safety purpose

    International Nuclear Information System (INIS)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A.; Carvalho, Paulo Victor R.; Seixas, Jose M.; Silva, Eduardo Antonio B.; Waintraub, Fabio

    2013-01-01

    This work describes improvements in a surveillance system for safety purposes in nuclear plants. The objective is to track people online in video, in order to estimate the dose received by personnel, during working tasks executed in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a nuclear research reactor, Argonauta. Cameras have been installed within Argonauta room, supplying the data needed. Video processing methods were combined for detecting and tracking people in video. More specifically, segmentation, performed by background subtraction, was combined with a tracking method based on color distribution. The use of both methods improved the overall results. An alternative approach was also evaluated, by means of blind source signal separation. Results are commented, along with perspectives. (author)

  19. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    Science.gov (United States)

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.

  20. Intelligent Model for Video Survillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in a real-time environment. It guards against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications may also threaten the video surveillance application. As a result, cybercrime, illegal video access, the mishandling of videos and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  1. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  2. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  3. A Miniaturized Video System for Monitoring Drosophila Behavior

    Science.gov (United States)

    Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana

    2011-01-01

    populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
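The band-pass "event" extraction described above can be approximated offline: high-pass the mean-luminance signal by subtracting a running mean, then count threshold crossings. The frame rate, threshold, and running-mean high-pass below are illustrative choices, not the authors' analog circuit.

```python
import numpy as np

def activity_events(luminance, fs, thresh):
    """Count activity "events" as excursions of the high-passed luminance
    signal above a threshold (rising edges only)."""
    lum = np.asarray(luminance, dtype=float)
    k = int(fs)
    padded = np.pad(lum, k, mode="edge")               # avoid edge artefacts
    baseline = np.convolve(padded, np.ones(k) / k, mode="same")[k:-k]
    ac = np.abs(lum - baseline)                        # ~1 s running-mean high-pass
    above = ac > thresh
    rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return len(rising)

fs = 30                                                # 30 fps camera
lum = np.full(300, 100.0)                              # 10 s of uniform light
lum[60:65] -= 20                                       # a fly crosses the field...
lum[200:205] -= 20                                     # ...twice
print(activity_events(lum, fs, thresh=5.0))            # 2
```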

  4. Long-range eye tracking: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Jayaweera, S.K.; Lu, Shin-yee

    1994-08-24

    Design considerations for a long-range, Purkinje-effect-based video tracking system using current technology are presented. Past work, current experiments, and future directions are thoroughly discussed, with an emphasis on digital signal processing techniques and obstacles. It has been determined that while a robust, efficient, long-range, and non-invasive eye tracking system will be difficult to develop, such a project is indeed feasible.

  5. Hybrid markerless tracking of complex articulated motion in golf swings.

    Science.gov (United States)

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been used to demonstrate novel ideas in sports motion tracking. The main challenge associated with this research concerns the extraction of highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete straight from a sports broadcast video. We propose a hybrid tracking method, which combines three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The current outcomes of this research can play an important role in enhancing the performance of a golfer, provide vital information to sports medicine practitioners by offering technically sound guidance on movements, and should help diminish the risk of golfing injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.
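The normalised correlation-based template matching component of such a hybrid tracker can be sketched directly; a brute-force version (the frame and "hand" patch below are synthetic, and a real system would search only near the previous position):

```python
import numpy as np

def match_template(frame, template):
    """Locate a body-part template in a frame by normalised cross-correlation;
    returns the (row, col) of the best-matching top-left corner."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw] - frame[r:r + th, c:c + tw].mean()
            denom = tnorm * np.sqrt((w ** 2).sum())
            score = (t * w).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
frame = rng.integers(0, 255, size=(40, 40)).astype(float)
template = frame[12:20, 25:33].copy()        # an 8x8 "hand" patch
print(match_template(frame, template))       # (12, 25)
```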

  6. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    Science.gov (United States)

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variation. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variations. Temporal saliency, which represents moving target regions, is extracted from frame differences using Sauvola's local adaptive thresholding algorithm. Spatial saliency represents the target appearance details in candidate moving regions; SLIC superpixel segmentation together with color and moment features is used to compute the feature uniqueness and spatial compactness of saliency measurements. Because this is a time-consuming process, a parallel algorithm was developed to distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on the spatiotemporal saliency, and this model is incrementally updated to detect the target under appearance variation. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
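Sauvola's local adaptive threshold, used above on the frame-difference image, is T = m(1 + k(s/R - 1)) with local mean m, local standard deviation s, sensitivity k, and dynamic range R. A per-window sketch (window contents and parameters are illustrative):

```python
import numpy as np

def sauvola_threshold(window, k=0.2, R=128.0):
    """Sauvola's local adaptive threshold for one window:
    T = m * (1 + k * (s / R - 1)), where m and s are the local mean and std."""
    m, s = window.mean(), window.std()
    return m * (1.0 + k * (s / R - 1.0))

# In a frame-difference image, static background is flat (low std) while a
# moving target produces high local contrast.
flat = np.full((5, 5), 10.0)                     # static background patch
moving = np.array([[0.0, 255.0], [255.0, 0.0]])  # high-contrast moving edge

print(sauvola_threshold(flat))    # 8.0 (zero std pulls T below the mean)
print(sauvola_threshold(moving))  # ~127.4
```

Because the threshold adapts to local statistics, a single global value does not have to work for both quiet and busy regions of the difference image.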

  7. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system that is onboard the Space Shuttle has the following capabilities: camera, video signal switching and routing unit (VSU); and Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state-of-the-art in video technology and data storage systems, a survey was conducted of the High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of the state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements are shown graphically.

  8. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    Science.gov (United States)

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolution of cells. The main idea for tracking is the use of two distance functions: the first computed from the cells in the initial frame and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame, and the second forces them to stay close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. The approach can be generalized to 3D + time video analysis, where the spatio-temporal tubes are 4D objects.
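The combined use of the two distance functions can be illustrated with a toy greedy descent on a grid: at each backward step, move to the neighbour minimising a weighted sum of the two distances. This is a conceptual sketch, not the paper's algorithm (which finds an ideal path through segmented spatio-temporal tubes).

```python
import numpy as np

def backtrack_cell(start, d_init, d_center, alpha=1.0, beta=1.0):
    """Greedy descent on the combined cost alpha*d_init + beta*d_center:
    the first term pulls the trajectory toward the cell's position in the
    initial frame, the second keeps it near the segmented tube centreline."""
    cost = alpha * d_init + beta * d_center
    path, (r, c) = [start], start
    while True:
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < cost.shape[0] and 0 <= c + dc < cost.shape[1]]
        nr, nc = min(nbrs, key=lambda p: cost[p])
        if cost[nr, nc] >= cost[r, c]:
            return path
        r, c = nr, nc
        path.append((r, c))

# Toy 10x10 grid: d_init is the distance from the initial-frame cell at (0, 0);
# a flat d_center leaves only the pull toward the initial cell.
rows, cols = np.indices((10, 10))
d_init = np.hypot(rows, cols)
path = backtrack_cell((7, 4), d_init, np.zeros((10, 10)))
print(path[-1])  # (0, 0)
```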

  9. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video imagery has great significance for military and medical applications, but nighttime video images are of such poor quality that neither target nor background can be recognized. We therefore enhance nighttime video by fusing infrared and visible video images. Based on the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix derived from the improved SIFT algorithm rapidly registers the heterologous nighttime images, and the αβ-weighted algorithm can be applied in any scene. In the video image fusion system, the transfer matrix is used to register every frame and the αβ-weighted method to fuse every frame, which meets the real-time requirement of video. The fused video not only retains the clear target information of the infrared video image but also retains the detail and color information of the visible video image, and the fused video plays back fluently.
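After registration, an αβ-weighted fusion is a pixel-wise weighted sum of the two frames. A minimal sketch (the weights and frame values are illustrative; the paper's weight-selection rule is not reproduced here):

```python
import numpy as np

def alpha_beta_fuse(ir, vis, alpha=0.6, beta=0.4):
    """Pixel-wise weighted fusion of registered infrared and visible frames;
    alpha/beta trade off thermal target contrast against visible detail."""
    fused = alpha * ir.astype(float) + beta * vis.astype(float)
    return np.clip(fused, 0, 255).astype(np.uint8)

ir  = np.full((4, 4), 200, dtype=np.uint8)   # bright thermal target
vis = np.full((4, 4), 50,  dtype=np.uint8)   # dim visible detail
print(alpha_beta_fuse(ir, vis)[0, 0])        # 140
```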

  10. Automated interactive video playback for studies of animal communication.

    Science.gov (United States)

    Butkowski, Trisha; Yan, Wei; Gray, Aaron M; Cui, Rongfeng; Verzijden, Machteld N; Rosenthal, Gil G

    2011-02-09

    Video playback is a widely-used technique for the controlled manipulation and presentation of visual signals in animal communication. In particular, parameter-based computer animation offers the opportunity to independently manipulate any number of behavioral, morphological, or spectral characteristics in the context of realistic, moving images of animals on screen. A major limitation of conventional playback, however, is that the visual stimulus lacks the ability to interact with the live animal. Borrowing from video-game technology, we have created an automated, interactive system for video playback that controls animations in response to real-time signals from a video tracking system. We demonstrated this method by conducting mate-choice trials on female swordtail fish, Xiphophorus birchmanni. Females were given a simultaneous choice between a courting male conspecific and a courting male heterospecific (X. malinche) on opposite sides of an aquarium. The virtual male stimulus was programmed to track the horizontal position of the female, as courting males do in the wild. Mate-choice trials on wild-caught X. birchmanni females were used to validate the prototype's ability to effectively generate a realistic visual stimulus.

  11. Code domain steganography in video tracks

    Science.gov (United States)

    Rymaszewski, Sławomir

    2008-01-01

    This article presents a practical method of hiding secret information in a video stream. The method is dedicated to MPEG-2 streams. The algorithm takes into account not only the MPEG video coding scheme described in the standard but also the bit-level PES-packet encapsulation in the MPEG-2 Program Stream (PS). This modification gives higher capacity and more effective bit-rate control for the output stream than previously proposed methods.

  12. Real-time video analysis for retail stores

    Science.gov (United States)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

    With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment that play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life is challenging because of the variation and noise involved. We begin with a novel human tracking system built on an intelligent combination of motion-based and image-level object detection, and demonstrate an initial evaluation of this approach on an available standard dataset with promising results. Exact traffic estimation in a retail store requires correctly separating customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task, with a novel feature descriptor named the graded colour histogram defined for object representation. Using our role-based human classification and tracking system, we define a computationally efficient framework for generating two types of analytics: region-specific people counts and dwell-time estimation. The system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
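The motion-based side of such a tracker often starts from simple frame differencing. A hedged numpy sketch of that building block (function names and the threshold are illustrative, not from the paper), including a region-restricted activity count in the spirit of the region-specific analytics described:

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Binary mask of pixels whose intensity changed by more than `thresh`
    between consecutive frames (basic frame differencing)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

def region_count(mask, region):
    """Rough region-specific activity measure: number of changed pixels
    inside a rectangular region of interest (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = region
    return int(mask[y0:y1, x0:x1].sum())
```

A real system would follow this with connected-component grouping and the appearance-based classification the abstract describes; the mask alone only localizes change.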

  13. A tracking system for laboratory mice to support medical researchers in behavioral analysis.

    Science.gov (United States)

    Macrì, S; Mainetti, L; Patrono, L; Pieretti, S; Secco, A; Sergi, I

    2015-08-01

    The behavioral analysis of laboratory mice plays a key role in several medical and scientific research areas, such as biology, toxicology, pharmacology, and so on. Important information on mice behavior and their reaction to a particular stimulus is deduced from a careful analysis of their movements. Moreover, behavioral analysis of genetically modified mice allows obtaining important information about particular genes, phenotypes or drug effects. The techniques commonly adopted to support such analysis have many limitations, which make the related systems particularly ineffective. Currently, the engineering community is working to explore innovative identification and sensing technologies to develop new tracking systems able to guarantee benefits to animals' behavior analysis. This work presents a tracking solution based on passive Radio Frequency Identification Technology (RFID) in Ultra High Frequency (UHF) band. Much emphasis is given to the software component of the system, based on a Web-oriented solution, able to process the raw tracking data coming from a hardware system, and offer 2D and 3D tracking information as well as reports and dashboards about mice behavior. The system has been widely tested using laboratory mice and compared with an automated video-tracking software (i.e., EthoVision). The obtained results have demonstrated the effectiveness and reliability of the proposed solution, which is able to correctly detect the events occurring in the animals' cage, and to offer a complete and user-friendly tool to support researchers in behavioral analysis of laboratory mice.

  14. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    The discrete cosine transform is the orthogonal transformation most widely used for reducing the digital video stream. This paper analyzes the errors of television measuring systems and of data compression protocols: the main characteristics of measuring systems are described, the sources of their errors are identified, and the most effective video compression methods are determined. The influence of video compression error on television measuring systems is investigated, and the results obtained can increase the accuracy of such systems. In a television measuring system, image quality suffers both from distortions identical to those in analog systems and from distortions specific to encoding/decoding the digital video signal and to errors in the transmission channel. Distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and time, because playback quality at the receiver depends on the random pre- and post-history of the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.

  15. Automatic tracking of cells for video microscopy in patch clamp experiments.

    Science.gov (United States)

    Peixoto, Helton M; Munguba, Hermany; Cruz, Rossana M S; Guerreiro, Ana M G; Leao, Richardson N

    2014-06-20

    Visualisation of neurons labeled with fluorescent proteins or compounds generally requires exposure to intense light for a relatively long period of time, often leading to bleaching of the fluorescent probe and photodamage of the tissue. Here we created a technique to drastically shorten light exposure and improve the targeting of fluorescently labeled cells that is especially useful for patch-clamp recordings. We applied image tracking and mask overlay to reduce the time of fluorescence exposure and minimise mistakes when identifying neurons. Neurons are first identified according to visual criteria (e.g. fluorescent protein expression, shape, viability etc.), and a transmission microscopy image (Differential Interference Contrast, DIC, or Dodt contrast) containing the cell is used as a reference for the tracking algorithm. A fluorescence image can also be acquired later to be used as a mask (which can be overlaid on the target during live transmission video). As patch-clamp experiments require translating the microscope stage, we used pattern matching to track reference neurons in order to move the fluorescence mask to match the new position of the objective in relation to the sample. For the image processing we used the Open Source Computer Vision (OpenCV) library, including Speeded-Up Robust Features (SURF) for tracking cells. The dataset of images (n = 720) was analyzed under normal acquisition conditions and under the influence of noise (defocusing and brightness changes). We validated the method in dissociated neuronal cultures and fresh brain slices expressing Enhanced Yellow Fluorescent Protein (eYFP) or Tandem Dimer Tomato (tdTomato) proteins, which considerably decreased the exposure to fluorescence excitation, thereby minimising photodamage. We also show that the neuron tracking can be used in differential interference contrast or Dodt contrast microscopy. The techniques of digital image processing used in this work are an important addition to the set of microscopy

  16. Cobra: A content-based video retrieval system

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, W.; Jensen, C.S.; Jeffery, K.G.; Pokorny, J.; Saltenis, S.; Bertino, E.; Böhm, K.; Jarke, M.

    2002-01-01

    An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level

  17. Detection of goal events in soccer videos

    Science.gov (United States)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

    In this paper, we present an automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence; 2) detection of highlight event candidates based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM); 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method against the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources: in total seven hours of soccer games, comprising eight gigabytes of data. One of the five soccer games is used as training data (e.g., announcers' excited speech, ambient audience noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
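The MFCC front end referenced above can be sketched with numpy alone: window a frame, take the magnitude FFT, apply a triangular mel filterbank, take logs, and apply a DCT-II. This is the generic textbook pipeline, not the authors' exact implementation; the filter and coefficient counts are common defaults, and the function names are illustrative.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale, up to sr/2."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)   # rising edge
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)   # falling edge
    return fb

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    """MFCCs of one frame: window -> |FFT| -> mel filterbank -> log -> DCT-II."""
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft)))
    energies = mel_filterbank(n_filters, n_fft, sr) @ spectrum
    log_e = np.log(energies + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ log_e
```

These per-frame coefficient vectors are what an HMM-based event-candidate detector would consume as its observation sequence.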

  18. Development and application of traffic flow information collecting and analysis system based on multi-type video

    Science.gov (United States)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Nowadays, the intelligent transportation system (ITS) has become the new direction of transportation development. Traffic data, as a fundamental part of intelligent transportation systems, plays an increasingly crucial role. In recent years, video observation technology has been widely used in the field of traffic information collection. Traffic flow information contained in video data has many advantages: it is comprehensive and can be stored for a long time. However, there are still problems, such as low precision and high cost, in the process of collecting the information. Aiming at these problems, this paper proposes a traffic target detection method with broad applicability. Based on three different ways of obtaining video data (aerial photography, fixed cameras and handheld cameras), we develop intelligent analysis software that extracts the macroscopic and microscopic traffic flow information in the video, which can be used for traffic analysis and transportation planning. For road intersections, the system uses the frame difference method to extract traffic information; for freeway sections, it uses the optical flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that the system extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has good application prospects.
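The optical-flow step used for freeway sections can be illustrated with a single-patch Lucas-Kanade estimate, which solves a small least-squares system on the image gradients. This is a generic sketch under the usual brightness-constancy and small-motion assumptions, not the software described in the paper:

```python
import numpy as np

def lucas_kanade(prev, curr):
    """Least-squares estimate of a single (u, v) translation for a patch.
    Solves Ix*u + Iy*v = -It over all pixels of the patch."""
    prev = prev.astype(np.float64)
    Ix = np.gradient(prev, axis=1)       # horizontal spatial gradient
    Iy = np.gradient(prev, axis=0)       # vertical spatial gradient
    It = curr.astype(np.float64) - prev  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

On a smooth blob translated by one pixel, the recovered (u, v) is close to (1, 0); real vehicle tracking would apply this per feature window inside a pyramid.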

  19. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    Science.gov (United States)

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and among these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted using frame differencing with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions; SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. Saliency detection is time consuming, which prompted the development of a parallel algorithm that distributes the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on the spatiotemporal saliency; this sample model is incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421
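The Sauvola local adaptive threshold used for the temporal saliency step is T = m · (1 + k · (s/R − 1)), with m and s the local mean and standard deviation over a window. A naive, unoptimized numpy sketch with commonly used defaults for k and R (a production version would use integral images instead of the per-pixel loop):

```python
import numpy as np

def sauvola_threshold(img, w=5, k=0.2, R=128.0):
    """Sauvola local adaptive binarization: T = m * (1 + k * (s / R - 1))
    computed over a w x w window around each pixel."""
    img = img.astype(np.float64)
    pad = w // 2
    padded = np.pad(img, pad, mode='edge')
    H, W = img.shape
    T = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            win = padded[y:y + w, x:x + w]
            m, s = win.mean(), win.std()
            T[y, x] = m * (1 + k * (s / R - 1))
    return img > T
```

Applied to a frame-difference image, the resulting mask marks the temporally salient (moving) regions.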

  20. Semantic-based surveillance video retrieval.

    Science.gov (United States)

    Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve

    2007-04-01

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.

  1. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  2. Detection of patient movement during CBCT examination using video observation compared with an accelerometer-gyroscope tracking system.

    Science.gov (United States)

    Spin-Neto, Rubens; Matzen, Louise H; Schropp, Lars; Gotfredsen, Erik; Wenzel, Ann

    2017-02-01

    To compare video observation (VO) with a novel three-dimensional registration method, based on an accelerometer-gyroscope (AG) system, to detect patient movement during CBCT examination. The movements were further analyzed according to complexity and patient age. In 181 patients (118 females/63 males; age average 30 years, range: 9-84 years), 206 CBCT examinations were performed, which were video-recorded during examination. An AG was, at the same time, attached to the patient head to track head position in three dimensions. Three observers scored patient movement (yes/no) by VO. AG provided movement data on the x-, y- and z-axes. Thresholds for AG-based registration were defined at 0.5, 1, 2, 3 and 4 mm (movement distance). Movement detected by VO was compared with that registered by AG, according to movement complexity (uniplanar vs multiplanar, as defined by AG) and patient age (≤15, 16-30 and ≥31 years). According to AG, movement ≥0.5 mm was present in 160 (77.7%) examinations. According to VO, movement was present in 46 (22.3%) examinations. One VO-detected movement was not registered by AG. Overall, VO did not detect 71.9% of the movements registered by AG at the 0.5-mm threshold. At a movement distance ≥4 mm, 20% of the AG-registered movements were not detected by VO. Multiplanar movements such as lateral head rotation (72.1%) and nodding/swallowing (52.6%) were more often detected by VO in comparison with uniplanar movements, such as head lifting (33.6%) and anteroposterior translation (35.6%), at the 0.5-mm threshold. The prevalence of patients who move was highest in patients younger than 16 years (64.3% for VO and 92.3% for AG-based registration at the 0.5-mm threshold). AG-based movement registration resulted in a higher prevalence of patient movement during CBCT examination than VO-based registration. Also, AG-registered multiplanar movements were more frequently detected by VO than uniplanar movements. The prevalence of patients who move

  3. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  4. Statistical motion vector analysis for object tracking in compressed video streams

    Science.gov (United States)

    Leny, Marc; Prêteux, Françoise; Nicholson, Didier

    2008-02-01

    Compressed video is the digital raw material provided by video-surveillance systems and used for archiving and indexing purposes. Multimedia standards therefore have a direct impact on such systems. While MPEG-2 used to be the coding standard, MPEG-4 (part 2) has now replaced it in most installations, and MPEG-4 AVC/H.264 solutions are being released. Finely analysing the complex and rich MPEG-4 streams is the challenging issue addressed in this paper. The system we designed is based on five modules: low-resolution decoder, motion estimation generator, object motion filtering, low-resolution object segmentation, and cooperative decision. Our contributions are the statistical analysis of the spatial distribution of the motion vectors, the computation of DCT-based confidence maps, automatic motion activity detection in the compressed file, and a rough indexation by dedicated descriptors. The robustness and accuracy of the system are evaluated on a large corpus (hundreds of hours of indoor and outdoor videos with pedestrians and vehicles). Objective benchmarking of the performance is achieved with respect to five metrics that estimate the error contribution of each module for different implementations. This evaluation establishes that our system analyses up to 200 frames (720x288) per second (2.66 GHz CPU).
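One simple way to perform a statistical analysis of decoded motion vectors is to flag vectors that deviate from the median motion of the field, separating moving-object candidates from camera or noise motion. The sketch below is a generic stand-in for the paper's spatial-distribution analysis; the function name and the robust-scale heuristic are assumptions:

```python
import numpy as np

def filter_motion_vectors(mvs, k=2.0):
    """Flag motion vectors that deviate from the median motion of the field.
    Uses the median absolute deviation as a robust scale estimate."""
    mvs = np.asarray(mvs, float)
    med = np.median(mvs, axis=0)                 # dominant (background) motion
    dev = np.linalg.norm(mvs - med, axis=1)      # deviation of each vector
    scale = np.median(dev) + 1e-9                # robust scale, guarded against 0
    return dev > k * scale                       # True = moving-object candidate
```

In a compressed-domain pipeline these flags would be accumulated per macroblock and combined with DCT-based confidence maps before segmentation.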

  5. 78 FR 11988 - Open Video Systems

    Science.gov (United States)

    2013-02-21

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 [CS Docket No. 96-46, FCC 96-334] Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule; announcement of effective date... 43160, August 21, 1996. The final rules modified rules and policies concerning Open Video Systems. DATES...

  6. Thermal Tracking of Sports Players

    Directory of Open Access Journals (Sweden)

    Rikke Gade

    2014-07-01

    Full Text Available We present here a real-time tracking algorithm for thermal video from a sports game. Robust detection of people includes routines for handling occlusions and noise before tracking each detected person with a Kalman filter. This online tracking algorithm is compared with a state-of-the-art offline multi-target tracking algorithm. Experiments are performed on a manually annotated 2-minute video sequence of a real soccer game. The Kalman filter shows a very promising result on this rather challenging sequence, with a tracking accuracy above 70%, and is superior to the offline tracking approach. Furthermore, the combined detection and tracking algorithm runs in real time at 33 fps, even with large image sizes of 1920 × 480 pixels.

  7. Thermal tracking of sports players.

    Science.gov (United States)

    Gade, Rikke; Moeslund, Thomas B

    2014-07-29

    We present here a real-time tracking algorithm for thermal video from a sports game. Robust detection of people includes routines for handling occlusions and noise before tracking each detected person with a Kalman filter. This online tracking algorithm is compared with a state-of-the-art offline multi-target tracking algorithm. Experiments are performed on a manually annotated 2-minute video sequence of a real soccer game. The Kalman filter shows a very promising result on this rather challenging sequence, with a tracking accuracy above 70%, and is superior to the offline tracking approach. Furthermore, the combined detection and tracking algorithm runs in real time at 33 fps, even with large image sizes of 1920 × 480 pixels.
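The per-person Kalman filter used in both records above can be sketched in numpy as a standard constant-velocity filter over noisy position measurements. This is a generic 1-D textbook formulation, not the authors' tracker; the noise parameters are illustrative:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over noisy 1-D position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])   # state: [position, velocity]
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

A 2-D image tracker simply runs a 4-state version of this (x, y, vx, vy) per detected person, with data association matching detections to filters.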

  8. Novel approach to automatically classify rat social behavior using a video tracking system.

    Science.gov (United States)

    Peters, Suzanne M; Pinter, Ilona J; Pothuizen, Helen H J; de Heer, Raymond C; van der Harst, Johanneke E; Spruijt, Berry M

    2016-08-01

    In the past, studies in behavioral neuroscience and drug development have relied on simple and quick readout parameters of animal behavior to assess treatment efficacy or to understand underlying brain mechanisms. The predominant use of classical behavioral tests has been repeatedly criticized during the last decades because of their poor reproducibility, poor translational value and limited explanatory power in functional terms. We present a new method to monitor the social behavior of rats using automated video tracking. The velocity of movement and the distance between two rats were plotted in frequency distributions. In addition, behavior was manually annotated and related to the automatically obtained parameters for a validated interpretation. Inter-individual distance in combination with velocity of movement yielded specific behavioral classes, such as moving with high velocity when "in contact" or "in proximity". Human observations showed that these classes coincide with following (chasing) behavior, whereas when animals are "in contact" but at low velocity, behaviors such as allogrooming and social investigation were observed. Also, low-dose treatment with morphine and short isolation increased the time animals spent in contact or in proximity at high velocity. Current methods for investigating social rat behavior are mostly limited to short and relatively simple manual observations. The new, automated method for analyzing social behavior in a social interaction test presented here is shown to be sensitive to drug treatment and to housing conditions known to influence social behavior in rats. Copyright © 2016 Elsevier B.V. All rights reserved.
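The distance-plus-velocity classes described above can be reproduced with a few numpy operations on two tracked trajectories. The thresholds and class names below are illustrative assumptions, not the calibrated values from the study:

```python
import numpy as np

def classify_frames(pos_a, pos_b, contact_d=5.0, proximity_d=15.0, fast_v=2.0):
    """Label each frame from inter-animal distance and mean movement speed.
    pos_a, pos_b: (n_frames, 2) arrays of tracked x/y positions."""
    pos_a = np.asarray(pos_a, float)
    pos_b = np.asarray(pos_b, float)
    dist = np.linalg.norm(pos_a - pos_b, axis=1)
    speed_a = np.linalg.norm(np.diff(pos_a, axis=0), axis=1)
    speed_b = np.linalg.norm(np.diff(pos_b, axis=0), axis=1)
    speed = np.concatenate(([0.0], (speed_a + speed_b) / 2))  # no speed at frame 0
    labels = []
    for d, v in zip(dist, speed):
        zone = "contact" if d < contact_d else "proximity" if d < proximity_d else "apart"
        pace = "fast" if v > fast_v else "slow"
        labels.append(f"{zone}-{pace}")
    return labels
```

"contact-fast" frames would correspond to the following/chasing class, and "contact-slow" to allogrooming or social investigation, in the interpretation the abstract describes.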

  9. Renewable Energy Tracking Systems

    Science.gov (United States)

    Renewable energy generation ownership can be accounted through tracking systems. Tracking systems are highly automated, contain specific information about each MWh, and are accessible over the internet to market participants.

  10. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast' image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images have been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  11. Adaptive Kalman Filter Applied to Vision Based Head Gesture Tracking for Playing Video Games

    Directory of Open Access Journals (Sweden)

    Mohammadreza Asghari Oskoei

    2017-11-01

    Full Text Available This paper proposes an adaptive Kalman filter (AKF) to improve the performance of a vision-based human machine interface (HMI) applied to a video game. The HMI identifies head gestures and decodes them into corresponding commands. Face detection and feature tracking algorithms are used to detect the optical flow produced by head gestures. Such approaches often fail due to changes in head posture, occlusion and varying illumination. The adaptive Kalman filter is applied to estimate motion information and reduce the effect of missing frames in a real-time application. Failure in head gesture tracking eventually leads to malfunctioning game control and reduced scores, so the performance of the proposed vision-based HMI is examined using a game scoring mechanism. The experimental results show that the proposed interface has a good response time, and the adaptive Kalman filter improves the game scores by ten percent.
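A common way to make a Kalman filter adaptive is to re-estimate the measurement-noise variance R from the innovation sequence, so the filter trusts measurements less when tracking degrades. The 1-D sketch below illustrates that innovation-based scheme for a static state; it is a generic formulation, not the filter from the paper, and the adaptation rate is an assumption:

```python
import numpy as np

def adaptive_kalman_1d(zs, q=1e-3, r0=1.0, alpha=0.05):
    """1-D Kalman filter whose measurement-noise estimate R adapts to the
    observed innovation statistics (innovation-based AKF scheme)."""
    x, p, r = zs[0], 1.0, r0
    out = []
    for z in zs:
        p = p + q                      # predict (static state model)
        innov = z - x                  # innovation
        k = p / (p + r)                # Kalman gain
        x = x + k * innov
        p = (1 - k) * p
        # adapt R toward the observed innovation variance, kept positive
        r = max((1 - alpha) * r + alpha * (innov ** 2 - p), 1e-6)
        out.append(x)
    return out
```

During occlusion or illumination changes the innovations grow, R inflates, and the filter coasts on its prediction instead of chasing noisy measurements.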

  12. Application of Video Recognition Technology in Landslide Monitoring System

    Directory of Open Access Journals (Sweden)

    Qingjia Meng

    2018-01-01

    Full Text Available Video recognition technology is applied to a landslide emergency remote monitoring system, which identifies the trajectories of the landslide. The geological disaster monitoring system combines video recognition technology with the analysis of landslide monitoring data. The landslide video monitoring system transmits video image information, time points, network signal strength and power supply status to the server over a 4G network. The data are comprehensively analysed through a remote man-machine interface, and the front-end video surveillance system is activated when a threshold is reached or by manual control. The system performs intelligent identification on the target landslide video: the recognition algorithm is embedded in the intelligent analysis module, and video frames are identified, detected, analysed, filtered and morphologically processed. An algorithm based on artificial intelligence and pattern recognition marks the target landslide in the video frame and determines whether the landslide is in a normal state. The landslide video monitoring system provides remote monitoring and control from mobile devices, offering a quick and easy monitoring technology.

  13. Pennsylvania Source Term Tracking System

    International Nuclear Information System (INIS)

    1992-08-01

    The Pennsylvania Source Term Tracking System tabulates surveys received from radioactive waste generators in the Commonwealth. Data on radioactive waste is collected each quarter from generators using the Low-Level Radioactive Waste Management Quarterly Report Form (hereafter called the survey) and then entered into the tracking system database. This personal-computer-based tracking system can generate 12 types of tracking reports. The first four sections of this reference manual supply complete instructions for installing and setting up the tracking system on a PC. Section 5 presents instructions for entering quarterly survey data, and Section 6 discusses generating reports. The appendix includes samples of each report

  14. Super-resolution imaging applied to moving object tracking

    Science.gov (United States)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The tracked object is not always clearly visible, which makes the tracking result less precise; causes include low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, cropping either several frames or all of them. The second step tracks the object in the resulting super-resolution images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research a single-frame super-resolution technique, which has the advantage of fast computation, is used for tracking. The tracking method is CAMShift, whose advantage is a simple calculation based on the HSV color histogram, which copes with conditions in which the color of the object varies. The computational complexity and the large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good light conditions.
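The CAMShift tracker named above is built around the mean-shift loop sketched below (a numpy-only illustration; here `weights` stands in for the back-projected HSV histogram probabilities, and the window and blob coordinates in the usage are invented).

```python
import numpy as np

def mean_shift(weights, window, iters=10):
    """Shift an (x, y, w, h) window toward the centroid of pixel weights,
    the core iteration behind CAMShift tracking."""
    x, y, w, h = window
    for _ in range(iters):
        roi = weights[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break                          # no target mass under the window
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round((xs * roi).sum() / total))
        cy = int(round((ys * roi).sum() / total))
        # re-centre the window on the centroid, clamped to the image
        nx = min(max(x + cx - w // 2, 0), weights.shape[1] - w)
        ny = min(max(y + cy - h // 2, 0), weights.shape[0] - h)
        if (nx, ny) == (x, y):
            break                          # converged
        x, y = nx, ny
    return x, y, w, h
```

Full CAMShift additionally adapts the window size and orientation from the weight moments at the converged position.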

  15. Reduced bandwidth video for remote vehicle operations

    Energy Technology Data Exchange (ETDEWEB)

    Noell, T.E.; DePiero, F.W.

    1993-08-01

    Oak Ridge National Laboratory staff have developed a video compression system for low-bandwidth remote operations. The objective is to provide real-time video at data rates comparable to available tactical radio links, typically 16 to 64 thousand bits per second (kbps), while maintaining sufficient quality to achieve mission objectives. The system supports both continuous lossy transmission of black and white (gray scale) video for remote driving and progressive lossless transmission of black and white images for remote automatic target acquisition. The average data rate of the resulting bit stream is 64 kbps. This system has been demonstrated to provide video of sufficient quality to allow remote driving of a High-Mobility Multipurpose Wheeled Vehicle at speeds up to 15 mph (24.1 kph) on a moguled dirt track. The nominal driving configuration provides a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of approximately 1 s. This paper reviews the system approach and implementation, and further describes some of our experiences when using the system to support remote driving.
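The reported figures are mutually consistent, which a few lines of arithmetic make visible: at 64 kbps and 4 frames per second, each frame gets 16 kbit (2000 bytes), and a 125:1 compression ratio then implies a raw frame of roughly 250 kB, about the size of a 512 × 512 8-bit grayscale image.

```python
# Consistency check of the abstract's figures (no external data assumed).
link_kbps = 64        # channel rate, thousand bits per second
frame_rate_hz = 4     # frames per second
compression = 125     # per-frame compression ratio

bits_per_frame = link_kbps * 1000 / frame_rate_hz
raw_bytes = bits_per_frame / 8 * compression
print(bits_per_frame, raw_bytes)  # 16000.0 250000.0
```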

  16. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm^-1. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  17. Dedicated data recording video system for Spacelab experiments

    Science.gov (United States)

    Fukuda, Toshiyuki; Tanaka, Shoji; Fujiwara, Shinji; Onozuka, Kuniharu

    1984-04-01

    A feasibility study of modifying a video tape recorder (VTR) to add data recording capability was conducted. The system is an on-board system that supports Spacelab experiments as a dedicated video system and a dedicated data recording system, operating independently of the normal operation of the Orbiter, Spacelab and the other experiments. It continuously records the video image signals together with the acquired data, status and the operator's voice on one cassette video tape. Recorded subjects include the crews' actions, animals' behavior, microscopic views, materials melting in a furnace, etc. It is therefore expected that experimenters can analyse the synchronized video, voice and data signals easily and conveniently in their post-flight analysis.

  18. Urbanism on Track : Application of tracking technologies in urbanism

    NARCIS (Netherlands)

    Van der Hoeven, F.D.; Van Schaick, J.; Van der Spek, S.C.; Smit, M.G.J.

    2008-01-01

    Tracking technologies such as GPS, mobile phone tracking, video and RFID monitoring are rapidly becoming part of daily life. Technological progress offers huge possibilities for studying human activity patterns in time and space in new ways. Delft University of Technology (TU Delft) held an

  19. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    Directory of Open Access Journals (Sweden)

    Michiaki Inoue

    2017-10-01

    Full Text Available This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion-blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz, so that motion blur can be significantly reduced in free-viewpoint, high-frame-rate video shooting of fast-moving objects by drawing out the maximum performance of the actuator. We developed a prototype motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.

  20. Video tracking and post-mortem analysis of dust particles from all tungsten ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Endstrasser, N., E-mail: Nikolaus.Endstrasser@ipp.mpg.de [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Brochard, F. [Institut Jean Lamour, Nancy-Universite, Bvd. des Aiguillettes, F-54506 Vandoeuvre (France); Rohde, V., E-mail: Volker.Rohde@ipp.mpg.de [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Balden, M. [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Lunt, T.; Bardin, S.; Briancon, J.-L. [Institut Jean Lamour, Nancy-Universite, Bvd. des Aiguillettes, F-54506 Vandoeuvre (France); Neu, R. [Max-Planck-Insitut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2011-08-01

    2D dust particle trajectories are extracted from fast framing camera videos of ASDEX Upgrade (AUG) by a new time- and resource-efficient code and classified into stationary hot spots, single-frame events and real dust particle fly-bys. Using hybrid global and local intensity thresholding and linear trajectory extrapolation, individual particles could be tracked for up to 80 ms. Even under challenging conditions such as high particle density and strong vacuum vessel illumination, all particles detected for more than 50 frames are tracked correctly. During the 2009 campaign dust has been trapped on five silicon wafer dust collectors strategically positioned within the vacuum vessel of the full tungsten AUG. Characterisation of the outer morphology and determination of the elemental composition of 5 x 10^4 particles were performed via automated SEM-EDX analysis. A dust classification scheme based on these parameters was defined with the goal to link the particles to their most probable production sites.
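One way to read the "linear trajectory extrapolation" step is as a constant-velocity predict-and-gate association, sketched below (a hypothetical simplification, not the paper's code; the gate radius is invented): predict the next position from the last two track points, then link the nearest detection inside the gate.

```python
import numpy as np

def extend_track(track, detections, gate=5.0):
    """Extend a particle track by linear extrapolation: predict the next
    position from the last two points, then attach the nearest detection
    within a gating radius. Returns True if the track was extended."""
    p_prev, p_last = np.asarray(track[-2]), np.asarray(track[-1])
    predicted = 2 * p_last - p_prev            # constant-velocity prediction
    dets = np.asarray(detections, float)
    dists = np.linalg.norm(dets - predicted, axis=1)
    best = int(np.argmin(dists))
    if dists[best] <= gate:
        track.append(tuple(dets[best]))
        return True
    return False                               # particle lost or left the view
```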

  1. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System.

    Science.gov (United States)

    Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.

  2. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System

    Directory of Open Access Journals (Sweden)

    Fiona A. Desland

    2014-01-01

    Full Text Available Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.

  3. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a crucial component of safeguards and physical protection. Digital technology has revolutionized the surveillance scenario and brought in new capabilities such as better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguards requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguards surveillance systems. Today's safeguards systems can incorporate intelligent motion detection with a very low rate of false alarms and a smaller archiving volume, embedded image processing for object-behavior and event-based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating transmission. In physical protection, robust video motion detection, real-time moving-object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools like automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, gesture recognition, etc. makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional, all-weather, day-night surveillance a reality
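A common basis for the "intelligent motion detection with very low rate of false alarm" mentioned above is an adaptive background model. The sketch below shows a minimal exponential running-average model (hypothetical parameters; not the actual algorithm of any reviewed system): pixels far from the background estimate are flagged as foreground, and the model is updated only where the scene is judged static.

```python
import numpy as np

class RunningAverageBG:
    """Adaptive background model via exponential running average,
    with a per-pixel absolute-difference foreground test."""
    def __init__(self, first_frame, alpha=0.05, thresh=30):
        self.bg = first_frame.astype(float)
        self.alpha, self.thresh = alpha, thresh

    def apply(self, frame):
        fg = np.abs(frame.astype(float) - self.bg) > self.thresh
        # update the model only where no motion was detected,
        # so moving objects are not absorbed into the background
        self.bg = np.where(fg, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg
```

Production systems layer shadow suppression, per-pixel variance models (e.g. mixtures of Gaussians), and morphological cleanup on top of this idea.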

  4. An integrated circuit/packet switched video conferencing system

    Energy Technology Data Exchange (ETDEWEB)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A. [Fermi National Accelerator Lab., Batavia, IL (United States). HEP Network Resource Center; Waits, T.A. [Rutgers Univ., Piscataway, NJ (United States). Dept. of Physics and Astronomy

    1996-07-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology: circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC with the help of members of the CDF collaboration set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  5. An integrated circuit/packet switched video conferencing system

    International Nuclear Information System (INIS)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A.; Waits, T.A.

    1996-01-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology: circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC with the help of members of the CDF collaboration set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  6. Visual Analytics and Storytelling through Video

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.; Foote, Harlan P.; Thomas, Jim

    2005-10-31

    This paper supplements a video clip submitted to the Video Track of IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally, shares our production experiences with our readers.

  7. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy.

    Science.gov (United States)

    Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Riboldi, Marco; Ciocca, Mario; Orecchia, Roberto; Baroni, Guido

    2015-05-01

    External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The
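The pupil-localization step described above can be approximated, in its simplest form, by thresholding the dark pupil region that appears under infrared illumination and taking its centroid (an illustrative stand-in with an invented threshold, not the authors' actual method, which also uses corneal reflections and stereo triangulation).

```python
import numpy as np

def pupil_center(gray, dark_thresh=40):
    """Estimate the pupil centre as the centroid of the darkest pixels
    in a grayscale IR eye image; returns (x, y) or None if nothing found."""
    ys, xs = np.nonzero(gray < dark_thresh)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()
```

Repeating this on both calibrated cameras gives the 2D correspondences from which the 3D eye pose is triangulated.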

  8. Modular integrated video system (MIVS) review station

    International Nuclear Information System (INIS)

    Garcia, M.L.

    1988-01-01

    An unattended video surveillance unit, the Modular Integrated Video System (MIVS), has been developed by Sandia National Laboratories for International Safeguards use. An important support element of this system is a semi-automatic Review Station. Four component modules, including an 8 mm video tape recorder, a 4-inch video monitor, a power supply and control electronics utilizing a liquid crystal display (LCD), are mounted in a suitcase for portability. The unit communicates through the interactive, menu-driven LCD and may be operated on facility power throughout the world. During surveillance, the MIVS records video information at specified time intervals, while also inserting consecutive scene numbers and tamper event information. Using either of two available modes of operation, the Review Station reads the inserted information, counts the number of missed scenes and/or tamper events encountered on the tapes, and reports this to the user on the LCD. At the end of a review session, the system summarizes the results of the review, stops the recorder, and advises the user that the review is complete. In addition, the Review Station checks for any video loss on the tape

  9. Moving Target Detection and Active Tracking with a Multicamera Network

    Directory of Open Access Journals (Sweden)

    Long Zhao

    2014-01-01

    Full Text Available We propose a systematic framework for Intelligence Video Surveillance System (IVSS with a multicamera network. The proposed framework consists of low-cost static and PTZ cameras, target detection and tracking algorithms, and a low-cost PTZ camera feedback control algorithm based on target information. The target detection and tracking is realized by fixed cameras using a moving target detection and tracking algorithm; the PTZ camera is manoeuvred to actively track the target from the tracking results of the static camera. The experiments are carried out using practical surveillance system data, and the experimental results show that the systematic framework and algorithms presented in this paper are efficient.
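The "PTZ camera feedback control algorithm based on target information" can be sketched as a proportional controller that converts the target's pixel offset from the frame centre into pan/tilt increments (the field-of-view figures and gain below are hypothetical, not the paper's actual control law).

```python
def ptz_step(target_px, frame_size, fov_deg, gain=0.5):
    """One proportional control step for a PTZ camera.
    target_px: (x, y) target position in the static camera's frame.
    frame_size: (width, height) in pixels; fov_deg: (horizontal, vertical)
    field of view in degrees. Returns (pan, tilt) increments in degrees."""
    (tx, ty), (w, h) = target_px, frame_size
    fov_x, fov_y = fov_deg
    err_x = (tx - w / 2) / w * fov_x   # angular error, positive = target right of centre
    err_y = (ty - h / 2) / h * fov_y   # positive = target below centre
    return gain * err_x, gain * err_y
```

Applying this each frame drives the PTZ camera so the handed-over target converges toward the image centre.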

  10. Encrypted IP video communication system

    Science.gov (United States)

    Bogdan, Apetrechioaie; Luminiţa, Mateescu

    2010-11-01

    Digital video transmission is a permanent subject of development, research and improvement. This field of research has an exponentially growing market in civil, surveillance, security and military aplications. A lot of solutions: FPGA, ASIC, DSP have been used for this purpose. The paper presents the implementation of an encrypted, IP based, video communication system having a competitive performance/cost ratio .

  11. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its indexing feature. As the use of video cameras has increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and very robust to various kinds of face occlusions. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solve real cases.
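The occlusion-handling idea can be illustrated with ordinary PCA (the paper's FPCA is a fuzzy-weighted variant; this sketch is a stand-in): learn a face subspace from clean training faces, fit the subspace coefficients using only the visible pixels of the occluded face, then reconstruct the masked region from the fitted model.

```python
import numpy as np

def pca_reconstruct(X_train, x_occluded, mask, k=5):
    """Reconstruct the masked entries of a face vector from a PCA basis
    learned on clean training faces. `mask` is True for visible pixels."""
    mu = X_train.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    B = Vt[:k]                               # top-k principal directions
    # least-squares fit of the subspace coefficients on visible pixels only
    A = B[:, mask].T
    b = (x_occluded - mu)[mask]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu + coef @ B                     # full reconstruction, occlusion filled
```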

  12. Tracking a "facer's" behavior in a public plaza

    DEFF Research Database (Denmark)

    2014-01-01

    The video shows the tracking of a "facer's" behavior in a public plaza using a (non-privacy-violating) thermal camera and a visualization of the tracks in a space-time cube in a 3D GIS. The tracking data is used in my PhD project on Human Movement Patterns in Smart Cities. The recording and analysis of the thermal video was made in collaboration with Rikke Gade from the Visual Analytics of People Lab at Aalborg University.

  13. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters, which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.

  14. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g. cars, airplanes, etc.). Although MMS can provide an accurate mapping solution for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones and their video cameras. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system automatically extracts the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented which demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the same results obtained from using separately captured images instead of video.
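Automatically selecting "highly overlapping frames" amounts to choosing a frame stride from the platform speed, the video frame rate, and the ground footprint of each frame. A back-of-the-envelope sketch with invented figures (not the paper's actual selection criterion, which may also use image content):

```python
def frame_stride(speed_mps, fps, footprint_m, overlap=0.6):
    """How many video frames to skip so consecutive extracted frames
    still overlap by at least the given fraction of the footprint."""
    advance_per_frame = speed_mps / fps        # metres the platform moves per frame
    max_advance = footprint_m * (1 - overlap)  # largest baseline keeping the overlap
    return max(1, int(max_advance / advance_per_frame))
```

For example, at 5 m/s, 10 fps, and a 20 m footprint with 60% required overlap, every 16th frame suffices, a large reduction over processing all frames.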

  15. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.

  16. Video game training and the reward system

    OpenAIRE

    Lorenz, R.; Gleich, T.; Gallinat, J.; Kühn, S.

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors ...

  17. A review of techniques for the identification and measurement of fish in underwater stereo-video image sequences

    Science.gov (United States)

    Shortis, Mark R.; Ravanbakskh, Mehdi; Shaifat, Faisal; Harvey, Euan S.; Mian, Ajmal; Seager, James W.; Culverhouse, Philip F.; Cline, Danelle E.; Edgington, Duane R.

    2013-04-01

    Underwater stereo-video measurement systems are used widely for counting and measuring fish in aquaculture, fisheries and conservation management. To determine population counts, spatial or temporal frequencies, and age or weight distributions, snout to fork length measurements are captured from the video sequences, most commonly using a point and click process by a human operator. Current research aims to automate the measurement and counting task in order to improve the efficiency of the process and expand the use of stereo-video systems within marine science. A fully automated process will require the detection and identification of candidates for measurement, followed by the snout to fork length measurement, as well as the counting and tracking of fish. This paper presents a review of the techniques used for the detection, identification, measurement, counting and tracking of fish in underwater stereo-video image sequences, including consideration of the changing body shape. The review will analyse the most commonly used approaches, leading to an evaluation of the techniques most likely to be a general solution to the complete process of detection, identification, measurement, counting and tracking.

  18. The effect of action video game playing on sensorimotor learning: Evidence from a movement tracking task.

    Science.gov (United States)

    Gozli, Davood G; Bavelier, Daphne; Pratt, Jay

    2014-10-12

    Research on the impact of action video game playing has revealed performance advantages on a wide range of perceptual and cognitive tasks. It is not known, however, if playing such games confers similar advantages in sensorimotor learning. To address this issue, the present study used a manual motion-tracking task that allowed for a sensitive measure of both accuracy and improvement over time. When the target motion pattern was consistent over trials, gamers improved with a faster rate and eventually outperformed non-gamers. Performance between the two groups, however, did not differ initially. When the target motion was inconsistent, changing on every trial, results revealed no difference between gamers and non-gamers. Together, our findings suggest that video game playing confers no reliable benefit in sensorimotor control, but it does enhance sensorimotor learning, enabling superior performance in tasks with consistent and predictable structure. Copyright © 2014. Published by Elsevier B.V.

  19. Web-based remote video monitoring system implemented using Java technology

    Science.gov (United States)

    Li, Xiaoming

    2012-04-01

    An HTTP-based video transmission system has been built upon a peer-to-peer (P2P) network structure using Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated sub-networks. To achieve this, a video source peer has been developed together with a client video playback peer. The video source peer responds to video stream requests over the HTTP protocol. An HTTP-based pipe communication model speeds up the transmission of the video stream data, which is encoded into fragments using the JPEG codec. To convey video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
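The pipe model's fragment/reassemble step can be sketched as follows; the chunk size and the (index, bytes) fragment layout are assumptions for illustration, not details from the paper:

```python
def fragment(frame: bytes, chunk_size: int = 1024):
    """Split an encoded frame into numbered fragments for pipelined transfer."""
    return [(i, frame[i * chunk_size:(i + 1) * chunk_size])
            for i in range((len(frame) + chunk_size - 1) // chunk_size)]

def reassemble(fragments):
    """Restore the frame even if fragments arrive out of order."""
    return b"".join(data for _, data in sorted(fragments))
```

Numbering the fragments lets a relay peer forward them as they arrive while the playback peer restores order before decoding.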

  20. Video sensor architecture for surveillance applications.

    Science.gov (United States)

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
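The pluggable-component design described above can be sketched as a chain of stages, each consuming the previous stage's output. The stage names follow the abstract; the toy segmentation and feature stages are purely illustrative:

```python
class Pipeline:
    """Chain of pluggable processing components, as in the sensor-node runtime."""
    def __init__(self):
        self.stages = []

    def add(self, name, fn):
        self.stages.append((name, fn))
        return self                       # allow fluent chaining

    def run(self, frame):
        data = frame
        for name, fn in self.stages:
            data = fn(data)
        return data

# Illustrative stages: threshold-segment a grayscale frame, then count
# foreground pixels as a stand-in for feature extraction.
segment = lambda img: [[1 if px > 128 else 0 for px in row] for row in img]
count_fg = lambda mask: sum(map(sum, mask))

node = Pipeline().add("segmentation", segment).add("feature", count_fg)
```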

  1. Video Sensor Architecture for Surveillance Applications

    Directory of Open Access Journals (Sweden)

    José E. Simó

    2012-02-01

    Full Text Available This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  2. Application aware approach to compression and transmission of H.264 encoded video for automated and centralized transportation surveillance.

    Science.gov (United States)

    2012-10-01

    In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channel...

  3. Automatic measurement for solid state track detectors

    International Nuclear Information System (INIS)

    Ogura, Koichi

    1982-01-01

    Since tracks in solid state track detectors are measured with a microscope, observers are forced to do hard work that consumes time and labour. This leads to poor statistical accuracy or personal error. Therefore, much research has been done with the aim of simplifying and automating track measurement. There are two categories of automated measurement: simple counting of the number of tracks, and cases that also require geometrical elements such as the size of tracks or their coordinates. The former is called automatic counting and the latter automatic analysis. The usual approach in automatic counting is to estimate the total number of tracks over the whole detector area or within a microscope field of view; this is suitable when the track density is high. Methods that count tracks one by one include spark counting and the scanning microdensitometer. Automatic analysis includes video image analysis, in which high quality images obtained with a high resolution video camera are processed with a microcomputer, and the tracks are automatically recognized and measured by feature extraction. This method is described in detail. Among the many kinds of automatic measurement reported so far, the most frequently used are ''spark counting'' and ''video image analysis''. (Wakatsuki, Y.)
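For the video image analysis route, automatic counting amounts to segmenting the digitized frame and labeling connected foreground regions as candidate tracks. A minimal 4-connected labeling sketch (pure Python; a real system would also filter regions by size and shape):

```python
from collections import deque

def count_tracks(mask):
    """Count 4-connected foreground regions (candidate etched tracks)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                n += 1                       # new region found; flood-fill it
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return n
```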

  4. VISDTA: A video imaging system for detection, tracking, and assessment: Prototype development and concept demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Pritchard, D.A.

    1987-05-01

    It has been demonstrated that thermal imagers are an effective surveillance and assessment tool for security applications because: (1) they work day or night due to their sensitivity to thermal signatures; (2) penetration through fog, rain, dust, etc., is better than that of the human eye; (3) short or long range operation is possible with various optics; and (4) they are strictly passive devices providing visible imagery which is readily interpreted by the operator with little training. Unfortunately, most thermal imagers also require the setup of a tripod, connection of batteries, cables, display, etc. When this is accomplished, the operator must manually move the camera back and forth searching for signs of aggressor activity. VISDTA is designed to provide automatic panning, and in a sense, ''watch'' the imagery in place of the operator. The idea behind the development of VISDTA is to provide a small, portable, rugged system to automatically scan areas and detect targets by computer processing of images. It would use a thermal imager and possibly an intensified day/night TV camera, a pan/tilt mount, and a computer for system control. If mounted on a dedicated vehicle or on a tower, VISDTA will perform video motion detection functions on incoming video imagery, and automatically scan predefined patterns in search of abnormal conditions which may indicate attempted intrusions into the field-of-regard. In that respect, VISDTA is capable of improving the ability of security forces to maintain security of a given area of interest by augmenting present techniques and reducing operator fatigue.
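The video motion detection function can be approximated by differencing consecutive frames and flagging motion when the changed area exceeds a threshold; a simplified sketch (both thresholds are assumed values, not VISDTA parameters):

```python
def motion_detected(prev, curr, pixel_thresh=25, area_thresh=0.01):
    """Flag motion when enough pixels change between consecutive frames."""
    changed = sum(1 for pr, cr in zip(prev, curr)
                    for p, c in zip(pr, cr) if abs(p - c) > pixel_thresh)
    total = len(prev) * len(prev[0])
    return changed / total >= area_thresh
```

In a scanning system like the one described, the check would run per pan/tilt position against a stored reference frame for that position.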

  5. Classification of dual language audio-visual content: Introduction to the VideoCLEF 2008 pilot benchmark evaluation task

    NARCIS (Netherlands)

    Larson, M.; Newman, E.; Jones, G.J.F.; Köhler, J.; Larson, M.; de Jong, F.M.G.; Kraaij, W.; Ordelman, R.J.F.

    2008-01-01

    VideoCLEF is a new track for the CLEF 2008 campaign. This track aims to develop and evaluate tasks in analyzing multilingual video content. A pilot of a Vid2RSS task involving assigning thematic class labels to video kicks off the VideoCLEF track in 2008. Task participants deliver classification

  6. Function integrated track system

    OpenAIRE

    Hohnecker, Eberhard

    2010-01-01

    The paper discusses a function integrated track system that focuses on the reduction of acoustic emissions from railway lines. It is shown that the combination of an embedded rail system (ERS), a sound absorbing track surface, and an integrated mini sound barrier has significant acoustic advantages compared to a standard ballast superstructure. The acoustic advantages of an embedded rail system are particularly pronounced in the case of railway bridges. Finally, it is shown that a...

  7. Video redaction: a survey and comparison of enabling technologies

    Science.gov (United States)

    Sah, Shagan; Shringi, Ameya; Ptucha, Raymond; Burry, Aaron; Loce, Robert

    2017-09-01

    With the prevalence of video recordings from smart phones, dash cams, body cams, and conventional surveillance cameras, privacy protection has become a major concern, especially in light of legislation such as the Freedom of Information Act. Video redaction is used to obfuscate sensitive and personally identifiable information. Today's typical workflow involves simple detection, tracking, and manual intervention. Automated methods rely on accurate detection mechanisms being paired with robust tracking methods across the video sequence to ensure the redaction of all sensitive information while minimizing spurious obfuscations. Recent studies have explored the use of convolutional neural networks and recurrent neural networks for object detection and tracking. The present paper reviews the redaction problem and compares a few state-of-the-art detection, tracking, and obfuscation methods as they relate to redaction. The comparison introduces an evaluation metric that is specific to video redaction performance. The metric can be evaluated in a manner that allows balancing the penalty for false negatives and false positives according to the needs of a particular application, thereby assisting in the selection of component methods and their associated hyperparameters such that the redacted video has fewer frames that require manual review.
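A redaction-specific metric of the kind described, with tunable penalties for false negatives (privacy leaks) and false positives (spurious obfuscations), might be sketched as a weighted per-frame cost; the weights and the exact form are illustrative, not the paper's definition:

```python
def redaction_cost(frames, w_fn=5.0, w_fp=1.0):
    """Average weighted cost over frames, plus a manual-review frame count.
    Each frame is a (false_negative_count, false_positive_count) pair;
    any frame with nonzero cost would need manual review."""
    costs = [w_fn * fn + w_fp * fp for fn, fp in frames]
    review = sum(1 for c in costs if c > 0)
    return sum(costs) / len(costs), review
```

Raising `w_fn` relative to `w_fp` expresses that a missed redaction is worse than an unnecessary one, which is usually the case in privacy applications.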

  8. An insight on advantage of hybrid sun–wind-tracking over sun-tracking PV system

    International Nuclear Information System (INIS)

    Rahimi, Masoud; Banybayat, Meisam; Tagheie, Yaghoub; Valeh-e-Sheyda, Peyvand

    2015-01-01

    Graphical abstract: Real photograph of the hybrid sun–wind-tracking system. - Highlights: • A novel hybrid sun–wind-tracking system is proposed to enhance PV cell performance. • The wind tracker can cool down the PV cell while the sun-tracking system works. • The hybrid tracker achieved a 7.4% increase in energy gain over the sun tracker. • The overall daily output energy gain was increased by 49.83% using this system. - Abstract: This paper introduces the design and application of a novel hybrid sun–wind-tracking system. The hybrid system employs the cooling effect of wind, in addition to the advantages of tracking the sun, to enhance the power output of the examined hybrid photovoltaic (PV) cell. The principal experiment focuses on a comparison between dual-axis sun-tracking and hybrid sun–wind-tracking PV panels. The tests confirm that the overall daily output energy gain was increased by 49.83% compared with that of a fixed system. Moreover, an overall increase of about 7.4% in output power was found for the hybrid sun–wind-tracking system over the two-axis sun-tracking system.

  9. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
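The region covariance descriptor is the covariance matrix of per-pixel feature vectors gathered over the region. A minimal pure-Python sketch with a fixed three-feature pool (x, y, intensity); the paper's adaptive feature selection is omitted:

```python
def covariance_descriptor(region):
    """Covariance of per-pixel features [x, y, intensity] over a region."""
    feats = [(x, y, region[y][x])
             for y in range(len(region)) for x in range(len(region[0]))]
    n = len(feats)
    mean = [sum(f[k] for f in feats) / n for k in range(3)]
    # Sample covariance matrix (3x3, symmetric)
    return [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in feats) / (n - 1)
             for j in range(3)] for i in range(3)]
```

Richer pools typically add gradient magnitudes and orientations per pixel; the adaptive variant in the paper selects which features enter this computation.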

  10. Electronic evaluation for video commercials by impression index.

    Science.gov (United States)

    Kong, Wanzeng; Zhao, Xinxin; Hu, Sanqing; Vecchiato, Giovanni; Babiloni, Fabio

    2013-12-01

    How to evaluate the effect of commercials is significantly important in neuromarketing. In this paper, we propose an electronic way to evaluate the influence of video commercials on consumers by an impression index. The impression index combines both a memorization and an attention index while consumers observe video commercials, obtained by tracking EEG activity. Features are extracted from scalp EEG to evaluate the effectiveness of video commercials in the time-frequency-space domain, and the global field power is used as an impression index to evaluate video commercial scenes as time series. Experimental results demonstrate that the proposed approach is able to track variations of the cerebral activity related to cognitive tasks such as observing video commercials, and helps to judge from EEG signals whether a scene in a video commercial is impressive or not.
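Global field power, the quantity used here as an impression index, is at each time sample the spatial standard deviation of the signal across electrodes; a minimal sketch:

```python
import math

def global_field_power(eeg):
    """GFP per time sample: spatial standard deviation across channels.
    `eeg` is a list of samples, each a list of channel voltages."""
    out = []
    for sample in eeg:
        n = len(sample)
        mu = sum(sample) / n
        out.append(math.sqrt(sum((v - mu) ** 2 for v in sample) / n))
    return out
```

A flat topography gives GFP 0; strong spatial differentiation across the scalp, as during an engaging scene, raises it.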

  11. Tracking errors in a prototype real-time tumour tracking system

    International Nuclear Information System (INIS)

    Sharp, Gregory C; Jiang, Steve B; Shimizu, Shinichi; Shirato, Hiroki

    2004-01-01

    In motion-compensated radiation therapy, radio-opaque markers can be implanted in or near a tumour and tracked in real-time using fluoroscopic imaging. Tracking these implanted markers gives highly accurate position information, except when tracking fails due to poor or ambiguous imaging conditions. This study investigates methods for automatic detection of tracking errors, and assesses the frequency and impact of tracking errors on treatments using the prototype real-time tumour tracking system. We investigated four indicators for automatic detection of tracking errors, and found that the distance between corresponding rays was most effective. We also found that tracking errors cause a loss of gating efficiency of between 7.6% and 10.2%. The incidence of treatment beam delivery during tracking errors was estimated at between 0.8% and 1.25%.
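The "distance between corresponding rays" indicator has a simple geometric form: each camera's 2D marker detection back-projects to a 3D ray, and when both detections see the same marker the rays nearly intersect, so their shortest mutual distance flags mismatches. A sketch of that distance computation (pure vector algebra; the fluoroscopy geometry itself is not modeled):

```python
import math

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def ray_distance(p1, d1, p2, d2):
    """Shortest distance between two rays, each given as (point, direction)."""
    n = _cross(d1, d2)
    norm = math.sqrt(sum(c * c for c in n))
    diff = tuple(b - a for a, b in zip(p1, p2))
    if norm < 1e-12:                 # parallel rays: distance from the other line
        t = sum(a * b for a, b in zip(diff, d1)) / sum(a * a for a in d1)
        proj = tuple(p + t * d for p, d in zip(p1, d1))
        return math.dist(proj, p2)
    return abs(sum(a * b for a, b in zip(diff, n))) / norm
```

Thresholding this distance gives an automatic tracking-error test: correctly corresponding detections yield a near-zero value.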

  12. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.

  13. Energy Efficient Hybrid Dual Axis Solar Tracking System

    Directory of Open Access Journals (Sweden)

    Rashid Ahammed Ferdaus

    2014-01-01

    Full Text Available This paper describes the design and implementation of an energy efficient solar tracking system, evolved from a conventional mechanical single-axis tracker into a hybrid dual-axis one. To optimize the solar tracking mechanism, electromechanical systems were evolved through the implementation of different algorithms and methodologies. The hybrid dual-axis solar tracking system is designed, built, and tested based on both a solar map and a light-sensor-based continuous tracking mechanism. The light sensors also distinguish dark, cloudy and sunny conditions, assisting daily tracking. The designed tracker can track the sun's apparent position in different months and seasons; the electronic controller therefore requires a real-time clock to guide the tracking system to the solar position for the seasonal motion. The combination of these two tracking mechanisms makes the designed tracker a hybrid one. The power gain and system power consumption are compared with static and continuous dual-axis solar tracking systems. It is found that the power gain of the hybrid dual-axis solar tracking system is almost equal to that of a continuous dual-axis solar tracking system, whereas the hybrid tracker saves 44.44% of the power consumed in system operation compared to the continuous tracking system.
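The hybrid decision logic can be sketched as follows: under usable light the sensor pair steers continuous tracking, and in dark or heavily overcast conditions the controller falls back to the solar-map position indexed by the real-time clock. The thresholds, command format, and toy solar map below are assumptions for illustration:

```python
def tracker_command(ldr_east, ldr_west, hour, solar_map, dark=50, delta=10):
    """Hybrid azimuth decision (one axis shown): light sensors when sunny,
    solar map via the real-time clock when dark or heavily overcast."""
    if max(ldr_east, ldr_west) < dark:
        return ("goto", solar_map[hour])      # clock/solar-map fallback
    if abs(ldr_east - ldr_west) <= delta:
        return ("hold", None)                 # balanced on the sun: save motor power
    return ("nudge", "east" if ldr_east > ldr_west else "west")

# Toy solar map: azimuth setpoint per hour of daylight (degrees, illustrative)
SOLAR_MAP = {h: -60 + 15 * (h - 6) for h in range(6, 19)}
```

Holding position when the sensors are balanced is where the reported power saving comes from: the motors only move when the error is worth correcting.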

  14. Thermal Tracking of Sports Players

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    We present here a real-time tracking algorithm for thermal video from a sports game. Robust detection of people includes routines for handling occlusions and noise before tracking each detected person with a Kalman filter. This online tracking algorithm is compared with a state-of-the-art offline...

  15. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats

  16. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy

    International Nuclear Information System (INIS)

    Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Ciocca, Mario; Riboldi, Marco; Baroni, Guido; Orecchia, Roberto

    2015-01-01

    Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring
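Mapping the lesion position from CT/MR-derived eye-local coordinates into room coordinates, given the eye pose estimated by the ETS, is a rigid transform; a minimal sketch (the rotation matrix and translation here stand in for the estimated pose):

```python
def map_lesion(R, t, p_local):
    """Rigid transform: p_room = R @ p_local + t (R is 3x3; t, p are 3-vectors)."""
    return tuple(sum(R[i][j] * p_local[j] for j in range(3)) + t[i]
                 for i in range(3))
```

Re-evaluating this transform as the ETS updates R and t is what enables real-time tumour referencing during setup and irradiation.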

  17. Acquisition, tracking, and pointing IV; Proceedings of the Meeting, Orlando, FL, Apr. 19, 20, 1990

    Science.gov (United States)

    Gowrinathan, Sankaran

    1990-09-01

    Various papers on acquisition, tracking, and pointing are presented. Individual topics addressed include: backlash control techniques in geared servo mechanics; optical fiber and photodetector array for robotic seam tracking; star trackers for spacecraft applications; Starfire optical range tracking system for the 1.5 m telescope; real-time video image centroid tracker; optical alignment with a beamwalk system; line-of-sight stabilization requirements for target tracking system; image quality with narrow beam illumination in an active tracking system; IR sensor data fusion for target detection, identification, and tracking; target location and pointing algorithm for a three-axis stabilized line scanner. Also discussed are: adaptive control system techniques applied to inertial stabilization systems; supervisory control of electrooptic tracking and pointing; position loop compensation for flex-pivot-mounted gimbal stabilization systems; advanced testing methods for acquisition, tracking, and pointing; development of kinematics for gimballed mirror systems.

  18. A low-cost test-bed for real-time landmark tracking

    Science.gov (United States)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in sequence. Since each landmark is tracked independently, however, transferring the system to appropriate parallel hardware would allow targets to be added without significantly diminishing system speed.
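The dead-reckoning sensor suite listed above supports a simple pose update: encoder ticks give distance traveled and the gyroscope gives yaw rate; a sketch (the tick-to-metre scale and timestep are illustrative values):

```python
import math

def dead_reckon(pose, ticks, gyro_rate, dt, ticks_per_m=1000.0):
    """Update (x, y, heading) from encoder ticks and gyro yaw rate."""
    x, y, th = pose
    d = ticks / ticks_per_m            # distance this step (m)
    th += gyro_rate * dt               # integrate heading (rad)
    return (x + d * math.cos(th), y + d * math.sin(th), th)
```

Integrating like this drifts over time, which is why the test-bed pairs dead reckoning with visual landmark tracking for corrections.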

  19. Interactive Videos Enhance Learning about Socio-Ecological Systems

    Science.gov (United States)

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and more positive attitudes toward the subject following interactive video.

  20. CONTRACT ADMINISTRATIVE TRACKING SYSTEM (CATS)

    Science.gov (United States)

    The Contract Administrative Tracking System (CATS) was developed in response to an ORD NHEERL, Mid-Continent Ecology Division (MED)-recognized need for an automated tracking and retrieval system for Cost Reimbursable Level of Effort (CR/LOE) Contracts. CATS is an Oracle-based app...

  1. Video game training and the reward system.

    Science.gov (United States)

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  2. Video Game Use and Cognitive Performance: Does It Vary with the Presence of Problematic Video Game Use?

    OpenAIRE

    Collins, Emily; Freeman, Jonathan

    2014-01-01

    Action video game players have been found to outperform nonplayers on a variety of cognitive tasks. However, several failures to replicate these video game player advantages have indicated that this relationship may not be straightforward. Moreover, despite the discovery that problematic video game players do not appear to demonstrate the same superior performance as nonproblematic video game players in relation to multiple object tracking paradigms, this has not been investigated for other t...

  3. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    Science.gov (United States)

    Yang, Fan; Paindavoine, M

    2003-01-01

    This paper describes a real-time vision system that localizes faces in video sequences and verifies their identity. These processes use image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for a face recognition application using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and the digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of the hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for images of size 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
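An RBF network scores an input by Gaussian kernels on its distances to stored prototype centers, followed by a weighted sum; verification then thresholds the score. A minimal forward-pass sketch (centers, weights and threshold are illustrative; training and the face-descriptor front end are omitted):

```python
import math

def rbf_score(x, centers, weights, sigma=1.0):
    """RBF network output: weighted sum of Gaussian kernels on ||x - c||."""
    acts = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
            for c in centers]
    return sum(w * a for w, a in zip(weights, acts))

def verify(x, centers, weights, threshold=0.5):
    """Identity verification: accept when the score clears a threshold."""
    return rbf_score(x, centers, weights) >= threshold
```

The locality of the Gaussian kernel is what makes RBF networks attractive for embedded verification: distant (impostor) descriptors contribute almost nothing to the score.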

  4. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    Science.gov (United States)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates; i.e., DLT takes into account the dimensions of the swimming pool and the type of swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
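
The DLT calibration step described above has a compact linear-algebra core: each pixel/metric correspondence contributes two linear equations in the nine homography entries, and the homography is the null vector of the stacked system. The sketch below illustrates that idea under invented point values; it is not the authors' code.

```python
import numpy as np

def dlt_homography(px, world):
    """Estimate a 3x3 homography H mapping pixel points to world points
    (homogeneous: world ~ H @ px) from >= 4 correspondences via DLT."""
    A = []
    for (x, y), (X, Y) in zip(px, world):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    # The solution is the right singular vector with the smallest
    # singular value (the null space of A in the noise-free case).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    # Apply H to a 2-D point and dehomogenize.
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

With exactly four non-degenerate correspondences the system has a one-dimensional null space and the recovered homography is exact; with more points the SVD gives the least-squares solution.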

  5. Staff acceptance of video monitoring for coordination: a video system to support perioperative situation awareness.

    Science.gov (United States)

    Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard

    2009-08-01

To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video in an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% reporting no concern and 42% little concern. The top two reported uses of the video were to see if cases were finished and to see if a room was ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted in overall acceptance of the monitoring video, with residual levels of concern. OR nurses may express staff privacy concerns in the form of concerns over patient privacy. This study provides suggestions for technological and implementation strategies for video monitoring used for coordination in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a perceived loss of autonomy.

  6. Multithreaded hybrid feature tracking for markerless augmented reality.

    Science.gov (United States)

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
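
Frame-to-frame feature tracking by optical flow rests on the brightness-constancy assumption; in one dimension a single Lucas-Kanade step reduces to a scalar least-squares solve. The toy sketch below shows that step on a 1-D signal; it is illustrative only, not the system's multithreaded tracker.

```python
def estimate_shift(f1, f2):
    """One Lucas-Kanade step on a 1-D signal: solve Ix * d = -It in the
    least-squares sense for a small translation d (in samples)."""
    num = den = 0.0
    for i in range(1, len(f1) - 1):
        ix = (f1[i + 1] - f1[i - 1]) / 2.0  # central spatial gradient
        it = f2[i] - f1[i]                  # temporal difference
        num += ix * it
        den += ix * ix
    return -num / den
```

Real trackers apply the 2-D analogue of this solve in a window around each feature, iterating and operating over image pyramids for larger motions.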

  7. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
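
The candidate-keyframe stage built on accumulative color histogram differences can be illustrated with a toy version: keep a reference histogram and emit a new candidate whenever the current frame's histogram drifts past a threshold. This is a simplified sketch, not the paper's algorithm; the bin count and threshold are arbitrary, and frames are modeled as flat lists of grayscale values.

```python
def histogram(frame, bins=8, max_val=256):
    # Grayscale histogram of a frame given as a flat list of pixel values.
    h = [0] * bins
    for v in frame:
        h[v * bins // max_val] += 1
    return h

def hist_diff(h1, h2):
    # Normalised L1 distance between two histograms, in [0, 1].
    n = sum(h1)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * n)

def candidate_keyframes(frames, threshold=0.3):
    # Emit a candidate whenever the histogram drifts past the threshold
    # relative to the last selected keyframe.
    keys = [0]
    ref = histogram(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = histogram(f)
        if hist_diff(ref, h) > threshold:
            keys.append(i)
            ref = h
    return keys
```

The paper's full pipeline would then cluster and score these candidates using the other cues (camera motion, faces, audio events) before choosing the final set.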

  8. Fall detection in the elderly by head-tracking

    OpenAIRE

    Yu, Miao; Naqvi, Syed Mohsen; Chambers, Jonathan

    2009-01-01

In the paper, we propose a fall detection method based on head tracking within a smart home environment equipped with video cameras. A motion history image and code-book background subtraction are combined to determine whether large movement occurs within the scene. Based on the magnitude of the movement information, particle filters with different state models are used to track the head. The head tracking procedure is performed in two video streams taken by two separate cameras and three-dimension...

  9. New Management Tools – From Video Management Systems to Business Decision Systems

    Directory of Open Access Journals (Sweden)

    Emilian Cristian IRIMESCU

    2015-06-01

Full Text Available In recent decades, management has been characterized by the increased use of Business Decision Systems, also called Decision Support Systems. Moreover, systems that were until now used in a traditional way, for some simple activities (like security), have migrated to the decision area of management. Some examples are the Video Management Systems from the physical security field. This article underlines how Video Management Systems evolved into Business Decision Systems, what the advantages of their use are, and what the trends in this industry are. The article also analyzes whether Video Management Systems are, at this moment, real Business Decision Systems or whether some functions are still missing for them to rank at this level.

  10. Unattended digital video surveillance: A system prototype for EURATOM safeguards

    International Nuclear Information System (INIS)

    Chare, P.; Goerten, J.; Wagner, H.; Rodriguez, C.; Brown, J.E.

    1994-01-01

Ever increasing capabilities in video and computer technology have changed the face of video surveillance. From yesterday's film and analog video tape-based systems, we now emerge into the digital era with surveillance systems capable of digital image processing, image analysis, decision control logic, and random data access features -- all of which provide greater versatility with the potential for increased effectiveness in video surveillance. Digital systems also offer other advantages such as the ability to ''compress'' data, providing increased storage capacities and the potential for allowing longer surveillance periods. Remote surveillance and system-to-system communications are also benefits that can be derived from digital surveillance systems. All of these features are extremely important in today's climate of increasing safeguards activity and decreasing budgets. Los Alamos National Laboratory's Safeguards Systems Group and the EURATOM Safeguards Directorate have teamed to design and implement a surveillance system that will take advantage of the versatility of digital video for facility surveillance and data review. In this paper we will familiarize you with system components and features and report on progress in developmental areas such as image compression and region of interest processing

  11. Video Game Training and the Reward System

    Directory of Open Access Journals (Sweden)

    Robert C. Lorenz

    2015-02-01

Full Text Available Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after the training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in the ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  12. Video game training and the reward system

    Science.gov (United States)

    Lorenz, Robert C.; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training. PMID:25698962

  13. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
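
The triangulation step used to recover surrounding structure is typically a linear (DLT-style) solve: each view contributes two equations relating the unknown 3-D point to its observed projection. The sketch below shows a generic two-view linear triangulation, not the authors' implementation; the camera matrices and point in the example are invented.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are image coordinates."""
    # Each observed coordinate u gives the equation u*(P[2]·X) - P[row]·X = 0.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3-D point is the null vector of A, dehomogenized.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In an endoscopy pipeline the projection matrices would come from the frame-to-frame rotations and translations recovered via epipolar geometry, with the same solve repeated for every tracked surface point.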

  14. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.

  15. Assisting doctors on assessing movements in infants using motion tracking

    DEFF Research Database (Denmark)

    Olsen, Mikkel; Herskind, Anna; Nielsen, Jens Bo

    2015-01-01

In this work, we consider the possibilities of having an automatic computer-based system for tracking the movements of infants. An existing motion tracking system is used to process recorded video sequences containing both color and spatial information of the infant's body pose and movements. ... The system uses these sequences of data to estimate the underlying skeleton of the infant and parametrize the movements. Post-processing of these parameters can yield objective measurements of an infant's movement patterns. This could e.g. be quantification of (a)symmetry and recognition of certain gestures/actions...

  16. Application of one-axis sun tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Sefa, Ibrahim; Demirtas, Mehmet; Colak, Ilhami [Gazi University, Faculty of Technical Education, Department of Electrical Education, GEMEC-Gazi Electric Machines and Control Group, Ankara (Turkey)

    2009-11-15

This paper introduces the design and application of a novel one-axis sun tracking system which follows the position of the sun and allows investigating the effects of one-axis tracking on solar energy collection in Turkey. The tracking system includes a serial communication interface based on RS 485 to monitor whole processes on a computer screen and to plot data as graphics. In addition, system parameters such as the current, the voltage and the panel position have been observed by means of a microcontroller. The energy collected is measured and compared with a fixed solar system for the same solar panel. The results show that the tracking system collects considerably more solar energy than the fixed system. The tracking system developed in this study provides easy installation, simple mechanism and less maintenance. (author)

  17. Application of one-axis sun tracking system

    International Nuclear Information System (INIS)

    Sefa, Ibrahim; Demirtas, Mehmet; Colak, Ilhami

    2009-01-01

This paper introduces the design and application of a novel one-axis sun tracking system which follows the position of the sun and allows investigating the effects of one-axis tracking on solar energy collection in Turkey. The tracking system includes a serial communication interface based on RS 485 to monitor whole processes on a computer screen and to plot data as graphics. In addition, system parameters such as the current, the voltage and the panel position have been observed by means of a microcontroller. The energy collected is measured and compared with a fixed solar system for the same solar panel. The results show that the tracking system collects considerably more solar energy than the fixed system. The tracking system developed in this study provides easy installation, simple mechanism and less maintenance.

  18. Energy Systems Integration Facility Videos | Energy Systems Integration

    Science.gov (United States)

Facility | NREL. Videos from the Energy Systems Integration Facility (ESIF), including: NREL + SolarCity: Maximizing Solar Power on Electrical Grids; Redefining What's Possible for Renewable Energy: Grid Integration; and Robot-Powered Reliability Testing at NREL's ESIF Microgrid.

  19. Understanding Learning Style by Eye Tracking in Slide Video Learning

    Science.gov (United States)

    Cao, Jianxia; Nishihara, Akinori

    2012-01-01

More and more videos are now being used in e-learning contexts. To improve learning effectiveness, it is important to understand how students view online video. In this research, we investigate how students deploy their attention when they learn through interactive slide video, with the aim of better understanding observers' learning styles. Felder and…

  20. Ultra-Wideband Tracking System Design for Relative Navigation

    Science.gov (United States)

    Ni, Jianjun David; Arndt, Dickey; Bgo, Phong; Dekome, Kent; Dusl, John

    2011-01-01

This presentation briefly discusses a design effort for a prototype ultra-wideband (UWB) time-difference-of-arrival (TDOA) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being designed for use in localization and navigation of a rover in a GPS-deprived environment for surface missions. In one application enabled by the UWB tracking, a robotic vehicle carrying equipment can autonomously follow a crewed rover from work site to work site so that resources can be carried from one landing mission to the next, thereby saving up-mass. The UWB Systems Group at JSC has developed a UWB TDOA High Resolution Proximity Tracking System which can achieve sub-inch tracking accuracy of a target within the radius of the tracking baseline [1]. By extending the tracking capability beyond the radius of the tracking baseline, a tracking system is being designed to enable relative navigation between two vehicles for surface missions. A prototype UWB TDOA tracking system has been designed, implemented, tested, and proven feasible for relative navigation of robotic vehicles. Future work includes testing the system with the application code to increase the tracking update rate and evaluating the linear tracking baseline to improve the flexibility of antenna mounting on the following vehicle.
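
TDOA localization of the kind described finds the position whose predicted range differences to pairs of anchors best match the measured ones. The sketch below does this by brute-force grid search in 2-D; real systems solve the problem iteratively and in 3-D, and the anchor geometry here is invented.

```python
import math

def tdoa_residual(p, anchors, range_diffs):
    # Squared mismatch between predicted and measured range differences
    # (each measured TDOA times the propagation speed) at candidate p.
    d0 = math.dist(p, anchors[0])
    return sum((math.dist(p, a) - d0 - dd) ** 2
               for a, dd in zip(anchors[1:], range_diffs))

def locate_grid(anchors, range_diffs, lo=0.0, hi=10.0, step=0.05):
    # Brute-force search over a square region for the best-fitting position.
    best_p, best_r = None, float("inf")
    n = int(round((hi - lo) / step))
    for i in range(n + 1):
        for j in range(n + 1):
            p = (lo + i * step, lo + j * step)
            r = tdoa_residual(p, anchors, range_diffs)
            if r < best_r:
                best_p, best_r = p, r
    return best_p
```

With three anchors and two range differences, the intersecting hyperbolas pin down a unique 2-D position in the search region.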

  1. Video-tracker trajectory analysis: who meets whom, when and where

    Science.gov (United States)

    Jäger, U.; Willersinn, D.

    2010-04-01

Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare. Thus, due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of the relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event has great practical importance because it paves the way to answering the question "who meets whom, when and where". This, in turn, forms the basis for detecting potential situations where e.g. money, weapons or drugs are handed over from one person to another in crowded environments like railway stations, airports, or busy streets and places. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB which is able to track multiple individuals within a crowd in real-time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and to output the frame number, the persons' IDs from the tracker and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence and finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
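
The rule-based encounter detection described above can be sketched directly: compute pairwise inter-distances per frame and report a meeting once a pair stays within a distance threshold for a minimum number of consecutive frames. The thresholds and track data below are illustrative, not taken from the IOSB system.

```python
import math
from itertools import combinations

def detect_encounters(tracks, dist_thresh=2.0, min_frames=3):
    """tracks: {frame: {person_id: (x, y)}}. Returns a list of
    (frame, id_a, id_b, meeting_point) events, one per sustained encounter."""
    streak = {}   # (id_a, id_b) -> count of consecutive "close" frames
    events = []
    for frame in sorted(tracks):
        people = tracks[frame]
        close_now = set()
        for a, b in combinations(sorted(people), 2):
            if math.dist(people[a], people[b]) <= dist_thresh:
                close_now.add((a, b))
                streak[(a, b)] = streak.get((a, b), 0) + 1
                if streak[(a, b)] == min_frames:
                    # Report the midpoint as the meeting position.
                    mx = (people[a][0] + people[b][0]) / 2
                    my = (people[a][1] + people[b][1]) / 2
                    events.append((frame, a, b, (mx, my)))
        # Reset streaks for pairs that separated this frame.
        for pair in list(streak):
            if pair not in close_now:
                del streak[pair]
    return events
```

The minimum-frame requirement suppresses spurious alarms from people merely passing each other, which is the kind of kinematic rule the paper applies.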

  2. Activity-based exploitation of Full Motion Video (FMV)

    Science.gov (United States)

    Kant, Shashi

    2012-06-01

Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst query. Our approach utilizes a novel machine-vision based approach to index FMV, using object recognition and tracking, and event and activity detection. This approach enables FMV exploitation in real-time, as well as a forensic look-back within archives. It can help get the most information out of video sensor collection, focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  3. System design description for the LDUA common video end effector system (CVEE)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

The Common Video End Effector System (CVEE), system 62-60, was designed by the Idaho National Engineering Laboratory (INEL) to provide the control interface for the various video end effectors used on the LDUA. The CVEE system consists of a Support Chassis, which contains the input and output Opto-22 modules, relays, and power supplies, and a Power Chassis, which contains the bipolar supply and other power supplies. Together, the Support Chassis and the Power Chassis make up the CVEE system. The CVEE system is rack mounted in the At Tank Instrument Enclosure (ATIE). Once connected, it is controlled using the LDUA supervisory data acquisition system (SDAS). Video and control status are displayed on monitors within the LDUA control center

  4. Automated Video Surveillance for the Study of Marine Mammal Behavior and Cognition

    Directory of Open Access Journals (Sweden)

    Jeremy Karnowski

    2016-11-01

    Full Text Available Systems for detecting and tracking social marine mammals, including dolphins, can provide data to help explain their social dynamics, predict their behavior, and measure the impact of human interference. Data collected from video surveillance methods can be consistently and systematically sampled for studies of behavior, and frame-by-frame analyses can uncover insights impossible to observe from real-time, freely occurring natural behavior. Advances in boat-based, aerial, and underwater recording platforms provide opportunities to document the behavior of marine mammals and create massive datasets. The use of human experts to detect, track, identify individuals, and recognize activity in video demands significant time and financial investment. This paper examines automated methods designed to analyze large video corpora containing marine mammals. While research is converging on best solutions for some automated tasks, particularly detection and classification, many research domains are ripe for exploration.

  5. Effect Through Broadcasting System Access Point For Video Transmission

    Directory of Open Access Journals (Sweden)

    Leni Marlina

    2015-08-01

Full Text Available Most universities already operate wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore important to study how broadcasting instructional video through an access point performs in a university environment. Wired networks require cables to connect computers and transmit data between them, while wireless networks connect computers through radio waves. This research assesses how a WLAN access point performs when broadcasting instructional video from a server to clients. The study aims to show how to build a wireless network using an access point, and to set up a computer server for instructional video whose content is broadcast to clients through that access point.

  6. Real-Time Tumor Tracking in the Lung Using an Electromagnetic Tracking System

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Amish P., E-mail: Amish.Shah@orlandohealth.com [Department of Radiation Oncology, MD Anderson Cancer Center Orlando, Orlando, Florida (United States); Kupelian, Patrick A.; Waghorn, Benjamin J.; Willoughby, Twyla R.; Rineer, Justin M.; Mañon, Rafael R.; Vollenweider, Mark A.; Meeks, Sanford L. [Department of Radiation Oncology, MD Anderson Cancer Center Orlando, Orlando, Florida (United States)

    2013-07-01

Purpose: To describe the first use of the commercially available Calypso 4D Localization System in the lung. Methods and Materials: Under an institutional review board-approved protocol and an investigational device exemption from the US Food and Drug Administration, the Calypso system was used with nonclinical methods to acquire real-time 4-dimensional lung tumor tracks for 7 lung cancer patients. The aims of the study were to investigate (1) the potential for bronchoscopic implantation; (2) the stability of smooth-surface beacon transponders (transponders) after implantation; and (3) the ability to acquire tracking information within the lung. Electromagnetic tracking was not used for any clinical decision making and could only be performed before any radiation delivery in a research setting. All motion tracks for each patient were reviewed, and values of the average displacement, amplitude of motion, period, and associated correlation to a sinusoidal model (R²) were tabulated for all 42 tracks. Results: For all 7 patients at least 1 transponder was successfully implanted. To assist in securing the transponder at the tumor site, it was necessary to implant a secondary fiducial for most transponders owing to the transponder's smooth surface. For 3 patients, insertion into the lung proved difficult, with only 1 transponder remaining fixed during implantation. One patient developed a pneumothorax after implantation of the secondary fiducial. Once implanted, 13 of 14 transponders remained stable within the lung and were successfully tracked with the tracking system. Conclusions: Our initial experience with electromagnetic guidance within the lung demonstrates that transponder implantation and tracking is achievable though not clinically available. This research investigation proved that lung tumor motion exhibits large variations from fraction to fraction within a single patient and that improvements to both transponder and tracking system are still

  7. Thinking Tracks for Integrated Systems Design

    NARCIS (Netherlands)

    Bonnema, Gerrit Maarten; Denkena, B.; Gausemeijer, J.; Scholz-Reiter, B.

    2012-01-01

    The paper investigates systems thinking and systems engineering. After a short literature review, the paper presents, as a means for systems thinking, twelve thinking tracks. The tracks can be used as creativity starter, checklist, and as means to investigate effects of design decisions taken early

  8. FPGA Implementation of Video Transmission System Based on LTE

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2015-01-01

Full Text Available In order to support high-definition video transmission, an implementation of a video transmission system based on Long Term Evolution is designed. The system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. Tests show that this system consumes few hardware resources and is able to transmit bidirectional video clearly and stably.

  9. A content-based news video retrieval system: NVRS

    Science.gov (United States)

    Liu, Huayong; He, Tingting

    2009-10-01

This paper focuses on TV news programs and designs a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by category, such as politics, finance, and entertainment. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is efficient.

  10. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    Science.gov (United States)

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  11. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
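
The plane-induced homography central to this tracker can be estimated from four (or more) corresponding foot-point pairs with the standard direct linear transform (DLT); the NumPy sketch below is an illustrative reconstruction of that step, not the paper's implementation, which gathers its point pairs from targets associated via field-of-view lines:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via the DLT.

    src, dst: sequences of (x, y) point pairs, at least four of them.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null vector of the stacked system (last right singular vector) is H.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2-D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Toy correspondences: feet positions on camera A's ground plane and the
# same physical points seen by camera B (here, a pure translation).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (3, 3), (2, 4), (3, 4)]
H = homography_dlt(src, dst)
print(np.round(project(H, (0.5, 0.5)), 2))  # → [2.5 3.5]
```

Once a non-degenerate H is found, any new target's foot point in one camera can be projected into the other, which is how consistent labels are assigned.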

  12. Real-time vehicle detection and tracking in video based on faster R-CNN

    Science.gov (United States)

    Zhang, Yongjie; Wang, Jian; Yang, Xin

    2017-08-01

Vehicle detection and tracking is a significant part of driver-assistance systems. Traditional detection methods based on image information encounter enormous difficulties, especially against complex backgrounds. To solve this problem, a detection method based on deep learning, Faster R-CNN, which has very high detection accuracy and flexibility, is introduced. An algorithm for target tracking combining Camshift and a Kalman filter is proposed for vehicle tracking. Because the computation time of Faster R-CNN alone cannot achieve real-time detection, we use multi-thread techniques to detect and track vehicles through parallel computation for real-time application.
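
The Kalman-filter half of a Camshift-plus-Kalman tracker can be sketched with a constant-velocity model over the vehicle centroid. The NumPy example below is a generic illustration: in the actual system the measurement `z` would come from Camshift applied to the Faster R-CNN detection, and the noise covariances here are assumed values:

```python
import numpy as np

# Constant-velocity Kalman filter for a 2-D vehicle centroid.
# State: [x, y, vx, vy]; measurement: [x, y] (e.g. the Camshift window centre).
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)   # process noise covariance (assumed)
R = np.eye(2)          # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given measurement z."""
    x = F @ x                         # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(1, 11):                # vehicle moving 2 px/frame along x
    x, P = kalman_step(x, P, np.array([2.0 * t, 0.0]))
print(np.round(x, 1))                 # estimate approaches (20, 0, 2, 0)
```

The predicted state also gives Camshift a search window for the next frame, which is the usual reason for pairing the two.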

  13. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    Science.gov (United States)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities of processing pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked throughout the sequence, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe detection-association process was proposed and evaluated. The tracker results are used to implement an automatic, video-processing-based tool for collecting data on pedestrians crossing the street. Variations in instantaneous speed allowed the detection of the street-crossing phases (approach, waiting, and crossing), which are addressed for the first time in pedestrian road safety analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the proposed procedures have significant potential to automate the data collection process.
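
The phase-detection idea — using variations in instantaneous speed to separate approach, waiting, and crossing — can be sketched as a simple speed-thresholding pass over an extracted trajectory. The threshold and frame rate below are illustrative assumptions, not the paper's calibrated values:

```python
import math

def label_phases(traj, fps=25.0, wait_thresh=0.3):
    """Label each trajectory step 'waiting' or 'moving' by instantaneous speed.

    traj: (x, y) ground-plane positions in metres, one per frame.
    wait_thresh: speed in m/s below which the pedestrian counts as waiting.
    """
    labels = []
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) * fps   # m/s between frames
        labels.append("waiting" if speed < wait_thresh else "moving")
    return labels

# Approach the kerb, pause, then cross.
traj = [(0.00, 0.0), (0.05, 0.0), (0.10, 0.0),   # ~1.25 m/s approach
        (0.10, 0.0), (0.10, 0.0),                # standing still
        (0.15, 0.0), (0.20, 0.0)]                # crossing
print(label_phases(traj))
```

A "moving" run before the "waiting" block corresponds to the approach phase, and the run after it to the crossing itself.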

  14. IODA - a fast, automated and flexible system for ion track analysis on film detectors

    International Nuclear Information System (INIS)

    Guth, H.; Hellmann, A.

    1995-02-01

The IODA System (Ion Density Analysis) is used to analyse detector films resulting from experiments at the pulse power generator KALIF (Karlsruhe Light Ion Facility). The system consists of evaluation software and a microcomputer, which controls a microscope, a video interface, and a multiprocessor subsystem. The segmentation of ion tracks is done automatically by means of digital image processing and pattern recognition. After defining an evaluation range and selecting a suitable analysis method, the film is scanned by the microscope to count the impacts in the underlying image. Different methods can be selected according to the appearance of the ion tracks on the film. The evaluation results representing the ion density are stored in a matrix. The time needed for an evaluation at high resolution can be shortened by offloading time-consuming pattern recognition calculations to the multiprocessor subsystem. The bottlenecks of the system are the data transfer and the speed of the microscope stage. Simple handling of the system, even on alphanumeric terminals, was an important design goal. This was implemented by a logically structured menu system including online help features. This report can be used as a manual to support the user with system operation. (orig.) [de

  15. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  16. Nighttime vision-based car detection and tracking for smart road lighting system

    NARCIS (Netherlands)

    Matsiki, D.; Shrestha, P.; With, de P.H.N.

    2011-01-01

The objective of this paper is to detect cars in nighttime videos for controlling the illumination level of road lights, thereby saving power consumption. We present an effective method to detect and track cars based on the presence of head lights or rear lights. We detect the headlights and rear

  17. NucliTrack: an integrated nuclei tracking application.

    Science.gov (United States)

    Cooper, Sam; Barr, Alexis R; Glen, Robert; Bakal, Chris

    2017-10-15

Live imaging studies give unparalleled insight into dynamic single-cell behaviours and fate decisions. However, the challenge of reliably tracking single cells over long periods of time limits both the throughput and ease with which such studies can be performed. Here, we present NucliTrack, a cross-platform solution for automatically segmenting, tracking and extracting features from fluorescently labelled nuclei. NucliTrack performs similarly to other state-of-the-art cell tracking algorithms, but NucliTrack's interactive, graphical interface makes it significantly more user-friendly. NucliTrack is available as a free, cross-platform application and open-source Python package. Installation details and documentation are at: http://nuclitrack.readthedocs.io/en/latest/ A video guide can be viewed online: https://www.youtube.com/watch?v=J6e0D9F-qSU Source code is available through Github: https://github.com/samocooper/nuclitrack. A Matlab toolbox is also available at: https://uk.mathworks.com/matlabcentral/fileexchange/61479-samocooper-nuclitrack-matlab. sam@socooper.com. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  18. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video

    Directory of Open Access Journals (Sweden)

    Gil-beom Lee

    2017-03-01

Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.
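
The vertical-histogram partitioning step can be illustrated in NumPy: column sums of a binary foreground mask reveal a valley where a low, flat shadow joins the upright object, and the object is split there. This sketch covers only the partitioning, not the subsequent orientation test against light-source regions:

```python
import numpy as np

def split_at_valley(mask):
    """Split a binary foreground mask at the deepest interior valley of its
    vertical (column-wise) histogram; returns two (start, end) column ranges."""
    hist = mask.sum(axis=0)                        # foreground pixels per column
    cols = np.flatnonzero(hist)                    # occupied columns
    interior = np.arange(cols[0] + 1, cols[-1])    # skip the outer borders
    cut = int(interior[np.argmin(hist[interior])])
    return (int(cols[0]), cut), (cut + 1, int(cols[-1]))

# Toy mask: an upright object (tall columns) with a low cast shadow attached.
mask = np.zeros((10, 12), dtype=np.uint8)
mask[2:10, 2:5] = 1    # object: 8 px tall, columns 2-4
mask[8:10, 5:10] = 1   # shadow: 2 px tall, columns 5-9
left, right = split_at_valley(mask)
print(left, right)  # → (2, 5) (6, 9)
```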

  19. Quality assurance tracking and trending system (QATTS)

    International Nuclear Information System (INIS)

    Anderson, W.J.

    1987-01-01

    In 1984, The Philadelphia Electric Company (PECo) Quality Assurance (QA) Division recognized a need to modify the existing quality finding tracking program to generate a nuclear trending program that could detect trends of PECo-initiated findings that were not detectable to a day-to-day observer. Before 1984, each quality organization in PECo had a separate tracking system. An adequate quality trending program demanded that all findings be tracked in a common data base. The Quality Assurance Tracking and Trending System (QATTS) is divided into two parts, an on-line subsystem that provides access to QATTS data via corporate computer data screens and a reports and graphics subsystem that connects commercially available reports and graphic software computer packages to the QATTS data base. The QATTS can be accessed from any terminal connected to the main frame computer at PECo headquarters. The paper discusses the tracking system, report generation, responsible organization commitment tracking system (ROCT), and trending program

  20. Novel high accurate sensorless dual-axis solar tracking system controlled by maximum power point tracking unit of photovoltaic systems

    International Nuclear Information System (INIS)

    Fathabadi, Hassan

    2016-01-01

Highlights: • Novel highly accurate sensorless dual-axis solar tracker. • It has the advantages of both sensor-based and sensorless solar trackers. • It does not have the disadvantages of sensor-based and sensorless solar trackers. • Tracking error of only 0.11°, less than the tracking errors of other trackers. • An increase of 28.8–43.6% in energy efficiency, depending on the season. - Abstract: In this study, a novel highly accurate sensorless dual-axis solar tracker controlled by the maximum power point tracking unit available in almost all photovoltaic systems is proposed. The maximum power point tracking controller continuously calculates the maximum output power of the photovoltaic module/panel/array, and uses the deviations of the altitude and azimuth angles to track the sun direction where the greatest value of the maximum output power is extracted. Unlike all other sensorless solar trackers, the proposed solar tracking system is a closed-loop system, meaning it uses the actual direction of the sun at any time to track the sun direction; this is the contribution of this work. The proposed solar tracker has the advantages of both sensor-based and sensorless dual-axis solar trackers, without their disadvantages. All other sensorless solar trackers are open loop, i.e., they use offline data about the sun's path in the sky estimated from solar map equations, so their problems are low accuracy, sensitivity to cloudy skies, and the need for new data at each new location. A photovoltaic system has been built, and it is experimentally verified that the proposed solar tracking system tracks the sun direction with a tracking error of 0.11°, which is less than the tracking errors of other sensor-based and sensorless solar trackers. An increase of 28.8–43.6% in energy efficiency, depending on the season, is the main advantage of utilizing the proposed solar tracking system.
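
The closed-loop principle — perturb the altitude and azimuth angles and keep whichever move raises the measured maximum output power — can be sketched as a hill-climbing loop. The power model below is a toy stand-in for the real MPPT reading, and the step size is an assumption:

```python
import math

def measured_power(alt, az, sun=(45.0, 120.0)):
    """Toy stand-in for the MPPT power reading: peaks when facing the sun."""
    err = math.radians(alt - sun[0]) ** 2 + math.radians(az - sun[1]) ** 2
    return 100.0 * math.exp(-err)

def track_sun(alt, az, step=0.5, iters=500):
    """Perturb-and-observe on both axes, keeping only power-raising moves."""
    for _ in range(iters):
        for axis in (0, 1):
            for delta in (step, -step):
                trial = [alt, az]
                trial[axis] += delta
                if measured_power(*trial) > measured_power(alt, az):
                    alt, az = trial
                    break
    return alt, az

alt, az = track_sun(10.0, 90.0)
print(round(alt, 1), round(az, 1))  # → 45.0 120.0
```

Because only the power reading is consulted, no sun-position sensor or solar-map table is needed, which is the essence of the sensorless closed-loop scheme.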

  1. Modular Track System For Positioning Mobile Robots

    Science.gov (United States)

    Miller, Jeff

    1995-01-01

    Conceptual system for positioning mobile robotic manipulators on large main structure includes modular tracks and ancillary structures assembled easily along with main structure. System, called "tracked robotic location system" (TROLS), originally intended for application to platforms in outer space, but TROLS concept might also prove useful on Earth; for example, to position robots in factories and warehouses. T-cross-section rail keeps mobile robot on track. Bar codes mark locations along track. Each robot equipped with bar-code-recognizing circuitry so it quickly finds way to assigned location.

  2. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study, the authors propose an online algorithm combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamic MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.
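
Local sparse codes of the kind used as MIL training data can be computed with a greedy solver such as orthogonal matching pursuit; the sketch below shows generic OMP over an overcomplete dictionary (a stand-in for a dictionary of target patches, not the authors' exact pipeline):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x with k atoms of D.

    D: (n, m) dictionary with unit-norm columns; returns an m-dim sparse code.
    """
    residual = x.astype(float)
    support, coef = [], None
    for _ in range(k):
        # Greedily pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Jointly re-fit all selected atoms (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

n = 8
rng = np.random.default_rng(0)
signs = rng.choice([-1.0, 1.0], size=(n, n))
D = np.hstack([np.eye(n), signs / np.sqrt(n)])  # overcomplete 8 x 16 dictionary
x = 2.0 * D[:, 3] - 1.5 * D[:, 5]               # "patch" built from two atoms
code = omp(D, x, k=2)
print(np.flatnonzero(code))  # → [3 5]
```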

  3. Video copy protection and detection framework (VPD) for e-learning systems

    Science.gov (United States)

    ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.

    2013-03-01

This article reviews and compares copyright protection approaches for digital video files, which can be categorized as content-based and digital-watermarking copy detection. We then describe how to protect a digital video using a dedicated video data hiding method and algorithm. We also discuss how to detect the copyright status of a file and, expounding the direction of video copy detection technology and combining it with our own research results, put forward a new video protection and copy detection approach for plagiarism detection in e-learning systems based on video data hiding. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).
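
As a concrete illustration of the video data hiding idea, the sketch below embeds watermark bits into the least significant bits of frame pixels. This generic LSB scheme is assumed for illustration only, since the record does not detail the paper's actual hiding algorithm:

```python
import numpy as np

def embed_lsb(frame, bits):
    """Hide a bit string in the least significant bits of the first pixels."""
    flat = frame.flatten()                      # copy; the original is untouched
    b = np.array([int(c) for c in bits], dtype=np.uint8)
    flat[:b.size] = (flat[:b.size] & 0xFE) | b  # clear LSB, then write the bit
    return flat.reshape(frame.shape)

def extract_lsb(frame, n):
    """Read back an n-bit string from the least significant bits."""
    return "".join(str(int(v) & 1) for v in frame.flatten()[:n])

frame = np.full((4, 4), 128, dtype=np.uint8)    # stand-in for a video frame
marked = embed_lsb(frame, "1011")
print(extract_lsb(marked, 4))                   # → 1011
```

Each pixel changes by at most one intensity level, so the mark is imperceptible while the hidden bits remain recoverable for copy detection.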

  4. 40 CFR 73.30 - Allowance tracking system accounts.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Allowance tracking system accounts. 73.30 Section 73.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE ALLOWANCE SYSTEM Allowance Tracking System § 73.30 Allowance tracking system...

  5. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild

    KAUST Repository

Müller, Matthias; Bibi, Adel Aamer; Giancola, Silvio; Al-Subaihi, Salman; Ghanem, Bernard

    2018-01-01

    Despite the numerous developments in object tracking, further development of current tracking algorithms is limited by small and mostly saturated datasets. As a matter of fact, data-hungry trackers based on deep-learning currently rely on object detection datasets due to the scarcity of dedicated large-scale tracking datasets. In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild. We provide more than 30K videos with more than 14 million dense bounding box annotations. Our dataset covers a wide selection of object classes in broad and diverse context. By releasing such a large-scale dataset, we expect deep trackers to further improve and generalize. In addition, we introduce a new benchmark composed of 500 novel videos, modeled with a distribution similar to our training dataset. By sequestering the annotation of the test set and providing an online evaluation server, we provide a fair benchmark for future development of object trackers. Deep trackers fine-tuned on a fraction of our dataset improve their performance by up to 1.6% on OTB100 and up to 1.7% on TrackingNet Test. We provide an extensive benchmark on TrackingNet by evaluating more than 20 trackers. Our results suggest that object tracking in the wild is far from being solved.

  6. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild

    KAUST Repository

    Müller, Matthias

    2018-03-28

    Despite the numerous developments in object tracking, further development of current tracking algorithms is limited by small and mostly saturated datasets. As a matter of fact, data-hungry trackers based on deep-learning currently rely on object detection datasets due to the scarcity of dedicated large-scale tracking datasets. In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild. We provide more than 30K videos with more than 14 million dense bounding box annotations. Our dataset covers a wide selection of object classes in broad and diverse context. By releasing such a large-scale dataset, we expect deep trackers to further improve and generalize. In addition, we introduce a new benchmark composed of 500 novel videos, modeled with a distribution similar to our training dataset. By sequestering the annotation of the test set and providing an online evaluation server, we provide a fair benchmark for future development of object trackers. Deep trackers fine-tuned on a fraction of our dataset improve their performance by up to 1.6% on OTB100 and up to 1.7% on TrackingNet Test. We provide an extensive benchmark on TrackingNet by evaluating more than 20 trackers. Our results suggest that object tracking in the wild is far from being solved.

  7. The modular integrated video system (MIVS)

    International Nuclear Information System (INIS)

    Schneider, S.L.; Sonnier, C.S.

    1987-01-01

    The Modular Integrated Video System (MIVS) is being developed for the International Atomic Energy Agency (IAEA) for use in facilities where mains power is available and the separation of the Camera and Recording Control Unit is desirable. The system is being developed under the US Program for Technical Assistance to the IAEA Safeguards (POTAS). The MIVS is designed to be a user-friendly system, allowing operation with minimal effort and training. The system software, through the use of a Liquid Crystal Display (LCD) and four soft keys, leads the inspector through the setup procedures to accomplish the intended surveillance or maintenance task. Review of surveillance data is accomplished with the use of a Portable Review Station. This Review Station will aid the inspector in the review process and determine the number of missed video scenes during a surveillance period

  8. Determining nest predators of the Least Bell's Vireo through point counts, tracking stations, and video photography

    Science.gov (United States)

    Peterson, Bonnie L.; Kus, Barbara E.; Deutschman, Douglas H.

    2004-01-01

    We compared three methods to determine nest predators of the Least Bell's Vireo (Vireo bellii pusillus) in San Diego County, California, during spring and summer 2000. Point counts and tracking stations were used to identify potential predators and video photography to document actual nest predators. Parental behavior at depredated nests was compared to that at successful nests to determine whether activity (frequency of trips to and from the nest) and singing vs. non-singing on the nest affected nest predation. Yellow-breasted Chats (Icteria virens) were the most abundant potential avian predator, followed by Western Scrub-Jays (Aphelocoma californica). Coyotes (Canis latrans) were abundant, with smaller mammalian predators occurring in low abundance. Cameras documented a 48% predation rate with scrub-jays as the major nest predators (67%), but Virginia opossums (Didelphis virginiana, 17%), gopher snakes (Pituophis melanoleucus, 8%) and Argentine ants (Linepithema humile, 8%) were also confirmed predators. Identification of potential predators from tracking stations and point counts demonstrated only moderate correspondence with actual nest predators. Parental behavior at the nest prior to depredation was not related to nest outcome.

  9. Computer controlled scanning systems for quantitative track measurements

    International Nuclear Information System (INIS)

    Gold, R.; Roberts, J.H.; Preston, C.C.; Ruddy, F.H.

    1982-01-01

The status of three computer-controlled systems for quantitative track measurements is described. Two systems, an automated optical track scanner (AOTS) and an automated scanning electron microscope (ASEM), are used for scanning solid state track recorders (SSTR). The third system, the emulsion scanning processor (ESP), is an interactive system used to measure the length of proton tracks in nuclear research emulsions (NRE). Recent advances achieved with these systems are presented, with emphasis placed upon the current limitations of these systems for reactor neutron dosimetry

  10. Students paperwork tracking system (SPATRASE)

    Science.gov (United States)

    Ishak, I. Y.; Othman, M. B.; Talib, Rahmat; Ilyas, M. A.

    2017-09-01

This paper presents a system for tracking the status of paperwork using Near Field Communication (NFC) technology and mobile apps. The student paperwork tracking system, known as SPATRASE, was developed to let users track the location status of their paperwork. Currently, the approval process takes around a month or more, because many procedures must be completed within the department, and the submitter cannot find out where the paperwork is because of the inefficient manual system: the submitter has to call the student affairs department to ask for the paperwork's location status. This project was therefore proposed as an alternative that removes the waiting time for paperwork location status. The prototype involves both hardware and software: NFC tags, RFID readers, and mobile apps. An RFID reader is placed on the secretary's desk at each checkpoint, and the system uses a database built with Google Docs linked to a web server. The submitter receives a URL link and is directed to the web server and mobile apps, so the location status can be checked through the mobile apps and Google Docs. This makes the tracking process efficient and reliable, showing the paperwork's exact location and sparing the submitter from calling the department repeatedly. The project is fully functional, and we hope it can help Universiti Tun Hussein Onn Malaysia (UTHM) overcome the problems of missing paperwork and unknown paperwork location.

  11. Tracking of TV and video gaming during childhood: Iowa Bone Development Study.

    Science.gov (United States)

    Francis, Shelby L; Stancel, Matthew J; Sernulka-George, Frances D; Broffitt, Barbara; Levy, Steven M; Janz, Kathleen F

    2011-09-24

    Tracking studies determine the stability and predictability of specific phenomena. This study examined tracking of TV viewing (TV) and video game use (VG) from middle childhood through early adolescence after adjusting for moderate and vigorous physical activity (MVPA), percentage of body fat (% BF), and maturity. TV viewing and VG use were measured at ages 5, 8, 11, and 13 (n = 434) via parental- and self-report. MVPA was measured using the Actigraph, % BF using dual-energy x-ray absorptiometry, and maturity via Mirwald predictive equations. Generalized Estimating Equations (GEE) were used to assess stability and logistic regression was used to predict children "at risk" for maintaining sedentary behaviors. Additional models examined tracking only in overfat children (boys ≥ 25% BF; girls ≥ 32% BF). Data were collected from 1998 to 2007 and analyzed in 2010. The adjusted stability coefficients (GEE) for TV viewing were 0.35 (95% CI = 0.26, 0.44) for boys, 0.32 (0.23, 0.40) for girls, and 0.45 (0.27, 0.64) for overfat. For VG use, the adjusted stability coefficients were 0.14 (0.05, 0.24) for boys, 0.24 (0.10, 0.38) for girls, and 0.29 (0.08, 0.50) for overfat. The adjusted odds ratios (OR) for TV viewing were 3.2 (2.0, 5.2) for boys, 2.9 (1.9, 4.6) for girls, and 6.2 (2.2, 17.2) for overfat. For VG use, the OR were 1.8 (1.1, 3.1) for boys, 3.5 (2.1, 5.8) for girls, and 1.9 (0.6, 6.1) for overfat. TV viewing and VG use are moderately stable throughout childhood and predictive of later behavior. TV viewing appears to be more stable in younger children than VG use and more predictive of later behavior. Since habitual patterns of sedentarism in young children tend to continue to adolescence, early intervention strategies, particularly to reduce TV viewing, are warranted.
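
The reported odds ratios and confidence intervals can be reproduced from a 2×2 contingency table with the standard Woolf (logit) method; the counts below are hypothetical, chosen only to illustrate the calculation, and are not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf logit method) for a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: "high TV at age 5" (exposure) vs. "high TV at age 13".
or_, lo, hi = odds_ratio_ci(60, 40, 30, 64)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # → 3.2 1.8 5.8
```

An OR above 1 with a CI excluding 1, as here, is what the study reads as tracking: children sedentary early are disproportionately likely to remain so.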

  12. Tracking of TV and video gaming during childhood: Iowa Bone Development Study

    Directory of Open Access Journals (Sweden)

    Broffitt Barbara

    2011-09-01

Abstract Background Tracking studies determine the stability and predictability of specific phenomena. This study examined tracking of TV viewing (TV) and video game use (VG) from middle childhood through early adolescence after adjusting for moderate and vigorous physical activity (MVPA), percentage of body fat (% BF), and maturity. Methods TV viewing and VG use were measured at ages 5, 8, 11, and 13 (n = 434) via parental- and self-report. MVPA was measured using the Actigraph, % BF using dual-energy x-ray absorptiometry, and maturity via Mirwald predictive equations. Generalized Estimating Equations (GEE) were used to assess stability and logistic regression was used to predict children "at risk" for maintaining sedentary behaviors. Additional models examined tracking only in overfat children (boys ≥ 25% BF; girls ≥ 32% BF). Data were collected from 1998 to 2007 and analyzed in 2010. Results The adjusted stability coefficients (GEE) for TV viewing were 0.35 (95% CI = 0.26, 0.44) for boys, 0.32 (0.23, 0.40) for girls, and 0.45 (0.27, 0.64) for overfat. For VG use, the adjusted stability coefficients were 0.14 (0.05, 0.24) for boys, 0.24 (0.10, 0.38) for girls, and 0.29 (0.08, 0.50) for overfat. The adjusted odds ratios (OR) for TV viewing were 3.2 (2.0, 5.2) for boys, 2.9 (1.9, 4.6) for girls, and 6.2 (2.2, 17.2) for overfat. For VG use, the OR were 1.8 (1.1, 3.1) for boys, 3.5 (2.1, 5.8) for girls, and 1.9 (0.6, 6.1) for overfat. Conclusions TV viewing and VG use are moderately stable throughout childhood and predictive of later behavior. TV viewing appears to be more stable in younger children than VG use and more predictive of later behavior. Since habitual patterns of sedentarism in young children tend to continue to adolescence, early intervention strategies, particularly to reduce TV viewing, are warranted.

  13. ViCoMo : visual context modeling for scene understanding in video surveillance

    NARCIS (Netherlands)

    Creusen, I.M.; Javanbakhti, S.; Loomans, M.J.H.; Hazelhoff, L.; Roubtsova, N.S.; Zinger, S.; With, de P.H.N.

    2013-01-01

    The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of

  14. Anesthesia and fast-track in video-assisted thoracic surgery (VATS): from evidence to practice.

    Science.gov (United States)

    Umari, Marzia; Falini, Stefano; Segat, Matteo; Zuliani, Michele; Crisman, Marco; Comuzzi, Lucia; Pagos, Francesco; Lovadina, Stefano; Lucangelo, Umberto

    2018-03-01

    In thoracic surgery, the introduction of video-assisted thoracoscopic techniques has allowed the development of fast-track protocols, with shorter hospital lengths of stay and improved outcomes. The perioperative management needs to be optimized accordingly, with the goal of reducing postoperative complications and speeding recovery times. Premedication in the operating room should be administered judiciously, as it is often linked to late discharge from the post-anesthesia care unit (PACU). Inhalational anesthesia should be preferred when possible, given its protective effects against postoperative lung inflammation. Deep neuromuscular blockade should be pursued and carefully monitored, and an appropriate reversal administered before extubation. Management of one-lung ventilation (OLV) needs to be optimized to prevent not only intraoperative hypoxemia but also postoperative acute lung injury (ALI): protective ventilation strategies should therefore be implemented. Locoregional techniques should be favored over intravenous analgesia: the thoracic epidural, the paravertebral block (PVB), the intercostal nerve block (ICNB), and the serratus anterior plane block (SAPB) are thoroughly reviewed and the most common dosages are reported. Fluid therapy needs to be administered carefully, to avoid both overload and cardiovascular compromise. All these practices are analyzed individually in light of the most recent evidence, aimed at the best patient care. Finally, a few notes on some of the latest trends in research are presented, such as non-intubated video-assisted thoracoscopic surgery (VATS) and intravenous lidocaine.

  15. Tracking Persons-of-Interest via Unsupervised Representation Adaptation

    OpenAIRE

    Zhang, Shun; Huang, Jia-Bin; Lim, Jongwoo; Gong, Yihong; Wang, Jinjun; Ahuja, Narendra; Yang, Ming-Hsuan

    2017-01-01

    Multi-face tracking in unconstrained videos is a challenging problem as faces of one person often appear drastically different in multiple shots due to significant variations in scale, pose, expression, illumination, and make-up. Existing multi-target tracking methods often use low-level features which are not sufficiently discriminative for identifying faces with such large appearance variations. In this paper, we tackle this problem by learning discriminative, video-specific face representa...

  16. Video System for Viewing From a Remote or Windowless Cockpit

    Science.gov (United States)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
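    The cylindrical warping step used to merge adjacent images can be sketched as a coordinate mapping. This is a minimal illustration of the standard cylindrical projection, not the authors' implementation; the focal length f (in pixels) and the optical-center-relative coordinate convention are assumptions:

    ```python
    import math

    def cylindrical_project(x, y, f):
        """Map an image-plane point (x, y), measured relative to the optical
        center, onto a cylinder of radius f (focal length in pixels).
        Adjacent views warped this way can be blended side by side into a
        mosaic, since horizontal position becomes an angle on the cylinder."""
        theta = math.atan2(x, f)            # angle around the cylinder axis
        h = y / math.sqrt(x * x + f * f)    # height, normalized by the ray length
        return f * theta, f * h
    ```

    A point on the optical axis (x = 0) is left unchanged, while points far to the side are compressed, which is what lets neighboring camera views line up at their borders.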

  17. Feasibility study of using the RoboEarth cloud engine for rapid mapping and tracking with small unmanned aerial systems

    Science.gov (United States)

    Li-Chee-Ming, J.; Armenakis, C.

    2014-11-01

    This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and store data to secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE in mapping and tracking applications in outdoor environments by small UAMS. The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter. It flew over York University, up to approximately 40 metres above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning to about 3 meter accuracies, an AHRS (Attitude and Heading Reference System) estimating the attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured from the onboard camera were processed using VisualSFM and SURE, which are being reformed as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera show increases in accuracy when compared to the sUAMS' autopilot solution, derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.

  18. Investigation of tracking systems properties in CAVE-type virtual reality systems

    Science.gov (United States)

    Szymaniak, Magda; Mazikowski, Adam; Meironke, Michał

    2017-08-01

    In recent years, many scientific and industrial centers around the world have developed virtual reality systems or laboratories. One of the most advanced solutions is the Immersive 3D Visualization Lab (I3DVL), a CAVE-type (Cave Automatic Virtual Environment) laboratory. It contains two CAVE-type installations: a six-screen installation arranged in the form of a cube, and a four-screen installation, a simplified version of the former. The user's feeling of "immersion" and interaction with the virtual world depend on many factors, in particular on the accuracy of the user tracking system. In this paper, the properties of the tracking systems applied in the I3DVL were investigated. Two parameters were selected for analysis: the accuracy of the tracking system and the range over which markers are detected by the tracking system in the space of the CAVE. Measurements of system accuracy were performed for the six-screen installation, equipped with four tracking cameras, along the three axes X, Y, and Z. Rotation around the Y axis was also analyzed. The measured tracking system shows good linear and rotational accuracy. The biggest issue was the range over which markers are monitored inside the CAVE: it turned out that the tracking system loses sight of the markers in the corners of the installation. For comparison, in the simplified version of the CAVE (the four-screen installation), equipped with eight tracking cameras, this problem did not occur. The obtained results will allow for improvement of the CAVE's quality.

  19. COMBINING INDEPENDENT VISUALIZATION AND TRACKING SYSTEMS FOR AUGMENTED REALITY

    Directory of Open Access Journals (Sweden)

    P. Hübner

    2018-05-01

    Full Text Available The basic requirement for the successful deployment of a mobile augmented reality application is a reliable tracking system with high accuracy. Recently, a helmet-based inside-out tracking system which meets this demand has been proposed for self-localization in buildings. To realize an augmented reality application based on this tracking system, a display has to be added for visualization purposes. Therefore, the relative pose of this visualization platform with respect to the helmet has to be tracked. In the case of hand-held visualization platforms like smartphones or tablets, this can be achieved by means of image-based tracking methods like marker-based or model-based tracking. In this paper, we present two marker-based methods for tracking the relative pose between the helmet-based tracking system and a tablet-based visualization system. Both methods were implemented and comparatively evaluated in terms of tracking accuracy. Our results show that mobile inside-out tracking systems without integrated displays can easily be supplemented with a hand-held tablet as visualization device for augmented reality purposes.
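    The relative pose between the helmet-mounted tracking system and the hand-held tablet can be expressed by chaining 4x4 homogeneous transforms. The sketch below only illustrates that geometry under the assumption that both devices' poses are known in a common world frame; it is not the paper's marker-based estimation method:

    ```python
    import numpy as np

    def relative_pose(T_world_helmet, T_world_tablet):
        """Pose of the tablet expressed in the helmet frame, given both poses
        as 4x4 homogeneous transforms in a shared world frame (frame names
        are illustrative): T_helmet_tablet = inv(T_world_helmet) @ T_world_tablet."""
        return np.linalg.inv(T_world_helmet) @ T_world_tablet
    ```

    In the paper's setting, the world-frame helmet pose comes from inside-out tracking, while the tablet pose relative to the helmet is what the marker-based methods estimate directly.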

  20. Vehicle Tracking System, Vehicle Infrastructure Provided with Vehicle Tracking System and Method for Tracking

    NARCIS (Netherlands)

    Papp, Z.; Doodeman, G.J.N.; Nelisse, M.W.; Sijs, J.; Theeuwes, J.A.C.; Driessen, B.J.F.

    2010-01-01

    A vehicle tracking system is described comprising - a plurality of sensor nodes (10) that each provide a message (D) indicative for an occupancy status of a detection area of an vehicle infrastructure monitored by said sensor node, said sensor nodes (10) being arranged in the vehicle infrastructure

  1. A distributed database view of network tracking systems

    Science.gov (United States)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
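    The track-initiation transaction described above can be sketched as a toy two-phase commit in a single process (all class and function names are invented; a real system would exchange prepare/commit messages over the lossy network and handle timeouts):

    ```python
    class Tracker:
        """One peer holding a local copy of the distributed track picture."""
        def __init__(self, name, will_commit=True):
            self.name = name
            self.will_commit = will_commit   # simulated vote
            self.tracks = set()
            self.pending = None

        def prepare(self, track_id):         # phase 1: vote on the transaction
            self.pending = track_id
            return self.will_commit

        def commit(self):                    # phase 2: make the track durable
            self.tracks.add(self.pending)
            self.pending = None

        def abort(self):                     # phase 2: roll back
            self.pending = None

    def initiate_track(track_id, trackers):
        """Two-phase commit for track initiation: the new track enters every
        tracker's picture only if all trackers vote to commit."""
        votes = [t.prepare(track_id) for t in trackers]
        if all(votes):
            for t in trackers:
                t.commit()
            return True
        for t in trackers:
            t.abort()
        return False
    ```

    Track maintenance updates, which are frequent and tolerant of brief inconsistency, would bypass this protocol, matching the hybrid design suggested in the abstract.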

  2. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  4. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    Full Text Available A novel video conference system is developed. Supposing that three people A, B, and C attend the video conference, the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact seems to be kept between B and C). In the case of a triangular video conference, each video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. Cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to lie in the same direction, eye contact is kept and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point at the other participant. When the three participants sit at the vertices of an equilateral triangle, eye contact can be kept even for the situation mentioned above (eye contact between B and C from the viewpoint of A). Eye contact can be kept not only for 2 or 3 participants but for any number of participants, as long as they sit at the vertices of a regular polygon.

  5. Combined kV and MV imaging for real-time tracking of implanted fiducial markers

    International Nuclear Information System (INIS)

    Wiersma, R. D.; Mao Weihua; Xing, L.

    2008-01-01

    In the presence of intrafraction organ motion, target localization uncertainty can greatly hamper the advantage of highly conformal dose techniques such as intensity modulated radiation therapy (IMRT). To minimize the adverse dosimetric effect caused by tumor motion, a real-time knowledge of the tumor position is required throughout the beam delivery process. The recent integration of onboard kV diagnostic imaging together with MV electronic portal imaging devices on linear accelerators can allow for real-time three-dimensional (3D) tumor position monitoring during a treatment delivery. The aim of this study is to demonstrate a near real-time 3D internal fiducial tracking system based on the combined use of kV and MV imaging. A commercially available radiotherapy system equipped with both kV and MV imaging systems was used in this work. A hardware video frame grabber was used to capture both kV and MV video streams simultaneously through independent video channels at 30 frames per second. The fiducial locations were extracted from the kV and MV images using a software tool. The geometric tracking capabilities of the system were evaluated using a pelvic phantom with embedded fiducials placed on a moveable stage. The maximum tracking speed of the kV/MV system is approximately 9 Hz, which is primarily limited by the frame rate of the MV imager. The geometric accuracy of the system is found to be on the order of less than 1 mm in all three spatial dimensions. The technique requires minimal hardware modification and is potentially useful for image-guided radiation therapy systems
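    The fusion of simultaneous kV and MV fiducial projections into a 3D position can be illustrated with an idealized geometry: two orthogonal imagers at isocenter, the kV view resolving the (x, z) plane and the MV view the (y, z) plane. Real systems use the full projection geometry and magnification factors; this sketch only shows why two 2D views suffice:

    ```python
    def fuse_kv_mv(kv_uv, mv_uv):
        """Toy kV/MV fusion under an assumed idealized orthogonal imager
        geometry. kv_uv = (u, v) offsets on the kV imager, taken as (x, z);
        mv_uv = (u, v) on the MV imager, taken as (y, z); the shared z
        coordinate, seen by both imagers, is averaged."""
        x = kv_uv[0]
        y = mv_uv[0]
        z = (kv_uv[1] + mv_uv[1]) / 2.0
        return (x, y, z)
    ```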

  6. An Improved Mixture-of-Gaussians Background Model with Frame Difference and Blob Tracking in Video Stream

    Directory of Open Access Journals (Sweden)

    Li Yao

    2014-01-01

    Full Text Available Modeling the background and segmenting moving objects are significant techniques for computer vision applications. The Mixture-of-Gaussians (MoG) background model is commonly used for foreground extraction in video streams. However, when objects enter the scene and stay for a while, foreground extraction fails as the objects stand still and gradually merge into the background. In this paper, we adopt a blob tracking method to cope with this situation. To construct the MoG model more quickly in very crowded situations, we add a frame-difference method to the foreground extracted from the MoG. In addition, a new shadow removal method based on the RGB color space is proposed.
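    The combination of a per-pixel Gaussian background test with frame differencing can be sketched with a simplified single-Gaussian background (the paper uses a full mixture per pixel; the learning rate and thresholds below are assumptions):

    ```python
    import numpy as np

    def update_background(bg_mean, bg_var, frame, alpha=0.05):
        """Running-Gaussian background: each pixel keeps a mean and variance,
        updated with learning rate alpha (single-mode stand-in for MoG)."""
        diff = frame - bg_mean
        return bg_mean + alpha * diff, bg_var + alpha * (diff ** 2 - bg_var)

    def foreground_mask(bg_mean, bg_var, frame, prev_frame, k=2.5, fd_thresh=15.0):
        """A pixel is foreground if it deviates from the background model by
        more than k standard deviations OR differs strongly from the previous
        frame; the frame-difference term helps in crowded scenes where the
        background model has not yet converged."""
        gauss_fg = np.abs(frame - bg_mean) > k * np.sqrt(bg_var)
        frame_diff = np.abs(frame - prev_frame) > fd_thresh
        return gauss_fg | frame_diff
    ```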

  7. Specialized video systems for use in underground storage tanks

    International Nuclear Information System (INIS)

    Heckendom, F.M.; Robinson, C.W.; Anderson, E.K.; Pardini, A.F.

    1994-01-01

    The Robotics Development Groups at the Savannah River Site and the Hanford Site have developed remote video and photography systems for deployment in underground radioactive waste storage tanks at Department of Energy (DOE) sites as a part of the Office of Technology Development (OTD) program within DOE. Figure 1 shows the remote video/photography systems in a typical underground storage tank environment. Viewing and documenting the tank interiors and their associated annular spaces is an extremely valuable tool in characterizing their condition and contents and in controlling their remediation. Several specialized video/photography systems and robotic End Effectors have been fabricated that provide remote viewing and lighting. All are remotely deployable into and from the tank, and all viewing functions are remotely operated. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. Overview video systems, both monaural and stereo versions, include a camera, zoom lens, camera positioner, vertical deployment system, and positional feedback. Each independent video package can be inserted through a 100 mm (4 in.) diameter opening. A special attribute of these packages is their design to never get larger than the entry hole during operation and to be fully retrievable. The End Effector systems will be deployed on the large robotic Light Duty Utility Arm (LDUA) being developed by other portions of the OTD-DOE programs. The systems implement a multi-functional "over the coax" design that uses a single coaxial cable for all data and control signals over the more than 900 foot cable (or fiber optic) link

  8. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of stitching video automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
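    After SURF key points are matched across cameras, the homography between two views can be estimated from four or more correspondences with the direct linear transform. The numpy sketch below illustrates that registration step (it omits the RANSAC outlier rejection a real stitcher would need):

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Estimate the 3x3 homography H with dst ~ H @ src from point pairs
        (at least four) using the direct linear transform: stack two linear
        equations per correspondence and take the SVD null-space vector."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def apply_homography(H, pt):
        """Map a point through H (with the homogeneous divide)."""
        p = H @ np.array([pt[0], pt[1], 1.0])
        return p[0] / p[2], p[1] / p[2]
    ```

    Applying the estimated homography to every pixel of one view tells which pixels overlap the other view, which is where blending takes over.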

  9. Real Property Project Tracking System (RPPTS)

    Data.gov (United States)

    Department of Veterans Affairs — The Real Property Project Tracking System (RPPTS), formerly known as the Lease/Project Tracking (LEASE) database, contains information about lease and land projects...

  10. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  11. Video Content Search System for Better Students Engagement in the Learning Process

    Directory of Open Access Journals (Sweden)

    Alanoud Alotaibi

    2014-12-01

    Full Text Available As a component of the e-learning educational process, content plays an essential role. Increasingly, video-recorded lectures in e-learning systems are becoming more important to learners. In most cases, a single video-recorded lecture contains more than one topic or sub-topic. Therefore, to enable learners to find the desired topic and reduce learning time, e-learning systems need to provide a capability for searching within video content. This can be accomplished by enabling learners to identify the video, or the portion of it, that contains a keyword they are looking for. This research aims to develop a Video Content Search (VCS) system to facilitate searching in educational videos and their contents. A preliminary experiment was conducted on a selected university course. All students needed a system to avoid the time-wasting problem of watching long videos with no significant benefit. The statistics showed that the number of learners increased during the experiment. Future work will include studying the impact of the VCS system on students’ performance and satisfaction.

  12. Hybrid compression of video with graphics in DTV communication systems

    OpenAIRE

    Schaar, van der, M.; With, de, P.H.N.

    2000-01-01

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an efficient transmission/storage of these mixed video and graphics signals and, at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video...

  13. Facial expression system on video using widrow hoff

    Science.gov (United States)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area that connects human feelings to computer applications, such as human-computer interaction, data compression, facial animation, and face detection in video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method to train and test images with an Adaptive Linear Neuron (ADALINE) approach. The system's performance is evaluated by two parameters: detection rate and false-positive rate. The system's accuracy depends on good technique and on the face positions used in training and testing.
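    The Widrow-Hoff rule trains an ADALINE by nudging the weights along the gradient of the squared error of the linear (pre-threshold) output. Below is a minimal numpy sketch of the learning rule itself, not of the paper's full image pipeline; the learning rate and epoch count are arbitrary:

    ```python
    import numpy as np

    def train_adaline(X, y, lr=0.05, epochs=200):
        """Widrow-Hoff (LMS) training: for each sample, w += lr * err * x,
        where err is measured on the linear output rather than a thresholded
        one - the defining difference from the perceptron rule."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                err = target - (xi @ w + b)
                w += lr * err * xi
                b += lr * err
        return w, b
    ```

    For classification, the trained linear output would then be thresholded; features here would be the pixel or descriptor vectors extracted from the video frames.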

  14. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    He Dajun

    2004-01-01

    Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to the selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
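    The energy-relationship embedding on DFT coefficients can be illustrated on a single block. The coefficient pair, margin, and block size below are assumptions, not the paper's parameters; each coefficient is scaled together with its conjugate-symmetric partner so the watermarked block stays real-valued:

    ```python
    import numpy as np

    def _scale_pair(F, idx, factor):
        # Scale a DFT coefficient together with its conjugate-symmetric
        # partner so the spectrum stays Hermitian and the block stays real.
        n0, n1 = F.shape
        F[idx] *= factor
        F[(-idx[0]) % n0, (-idx[1]) % n1] *= factor

    def embed_bit(block, bit, c1=(1, 2), c2=(2, 1), margin=5.0):
        """Hide one bit by enforcing a magnitude (energy) relationship between
        two mid-frequency DFT coefficients; assumes both start nonzero."""
        F = np.fft.fft2(np.asarray(block, dtype=float))
        a, b = abs(F[c1]), abs(F[c2])
        if bit and a < b + margin:            # want |F[c1]| > |F[c2]|
            _scale_pair(F, c1, (b + margin) / (a + 1e-9))
        elif not bit and b < a + margin:      # want |F[c2]| > |F[c1]|
            _scale_pair(F, c2, (a + margin) / (b + 1e-9))
        return np.real(np.fft.ifft2(F))

    def extract_bit(block, c1=(1, 2), c2=(2, 1)):
        F = np.fft.fft2(np.asarray(block, dtype=float))
        return bool(abs(F[c1]) > abs(F[c2]))
    ```

    The margin is what makes the mark survive mild processing such as compression: the enforced magnitude gap has to be eroded before the extracted bit flips.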

  15. MTB-USDH Compensation Tracking System (MTB-CTS)

    Data.gov (United States)

    US Agency for International Development — The MTB-USDH Compensation Tracking System (MTB-CTS) assists managers in monitoring their payroll costs for U.S. direct hires....

  16. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.

    2005-01-01

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due...

  17. Flight Activity and Crew Tracking System -

    Data.gov (United States)

    Department of Transportation — The Flight Activity and Crew Tracking System (FACTS) is a Web-based application that provides an overall management and tracking tool of FAA Airmen performing Flight...

  18. A Retrieval Optimized Surveillance Video Storage System for Campus Application Scenarios

    Directory of Open Access Journals (Sweden)

    Shengcheng Ma

    2018-01-01

    Full Text Available This paper investigates and analyzes the characteristics of video data and puts forward a campus surveillance video storage system, with the university campus as the specific application environment. To address the challenge that content-based video retrieval response times are too long, a key-frame index subsystem is designed. The key frames of a video reflect its main content; extracted from the video, they are associated with the metadata information to establish the storage index. The key-frame index is used in lookup operations while querying. This method greatly reduces the amount of video data read and effectively improves query efficiency. Building on the above, we model the storage system with a stochastic Petri net (SPN) and verify the improvement in query performance by quantitative analysis.
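    The key-frame index can be sketched as a simple inverted index from content tags to (video, timestamp) pairs, so queries touch only the index instead of scanning full video files. Class and field names are invented for illustration:

    ```python
    class KeyFrameIndex:
        """Inverted index over key frames: each key frame is registered with
        its video id, timestamp, and metadata tags describing its content."""
        def __init__(self):
            self._by_tag = {}

        def add(self, video_id, timestamp, tags):
            for tag in tags:
                self._by_tag.setdefault(tag, []).append((video_id, timestamp))

        def query(self, tag):
            """Return (video_id, timestamp) hits without reading any video data."""
            return list(self._by_tag.get(tag, []))
    ```

    A query then seeks directly to the returned timestamps, which is what cuts the retrieval response time compared with scanning whole recordings.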

  19. Thinking Tracks for Multidisciplinary System Design

    Directory of Open Access Journals (Sweden)

    Gerrit Maarten Bonnema

    2016-11-01

    Full Text Available Systems engineering is, for a large part, a process description of how to bring new systems into existence. It is valuable as it directs the development effort. Tools exist that can be used in this process. System analysis investigates existing and/or desired situations. However, how to create a system that instantiates the desired situation depends significantly on human creativity and insight; the required human trait here is commonly called systems thinking. In the literature, this trait is regularly invoked, but information on how to do systems thinking is scarce. Therefore, we have earlier introduced twelve concrete thinking tracks that help system designers make an optimal fit between the system under design, the identified issue, the user, the environment, and the rest of the world. This paper provides the scientific rationale for the thinking tracks based on the literature. Secondly, it presents three cases of application, leading to the conclusion that the tracks are usable and effective.

  20. An adaptive approach to human motion tracking from video

    Science.gov (United States)

    Wu, Lifang; Chen, Chang Wen

    2010-07-01

    Vision-based human motion tracking has drawn considerable interest recently because of its extensive applications. In this paper, we propose an approach to tracking the body motion of a human balancing on each foot. The ability to balance properly is an important indicator of neurological condition. Compared with many other human motion tracking problems, there is much less occlusion in tracking balancing. This less constrained problem allows us to combine a 2D model of the human body with image analysis techniques to develop an efficient motion tracking algorithm. First we define a hierarchical 2D model consisting of six components: head, body, and four limbs. Each of the four limbs involves a primary component (upper arm or leg) and a secondary component (lower arm or leg). In this model, we assume each component can be represented by a quadrangle and every component is connected to another by a joint. By making use of the inherent correlation between different components, we design a top-down updating framework and an adaptive algorithm constrained by foreground regions for robust and efficient tracking. The approach has been tested using the balancing movement in the HumanEva-I/II dataset. The average tracking time is under one second, which is much shorter than that of most current schemes.
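    The hierarchical body model with top-down updating can be sketched as a small tree of components, where a pose change applied to a parent propagates to its children. This is a structural illustration only; the tracker's actual image-analysis update is omitted and the component names are invented:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        """One body component (head, torso, or a limb segment); children are
        attached at joints and inherit the parent's motion top-down."""
        name: str
        position: tuple
        children: list = field(default_factory=list)

        def translate(self, dx, dy):
            x, y = self.position
            self.position = (x + dx, y + dy)
            for child in self.children:      # top-down propagation
                child.translate(dx, dy)
    ```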

  1. Upgrade of the ALICE Inner Tracking System

    OpenAIRE

    Reidt, Felix; Collaboration, for the ALICE

    2014-01-01

    During the Long Shutdown 2 of the LHC in 2018/2019, the ALICE experiment plans the installation of a novel Inner Tracking System. It will replace the current six layer detector system with a seven layer detector using Monolithic Active Pixel Sensors. The upgraded Inner Tracking System will have significantly improved tracking and vertexing capabilities, as well as readout rate to cope with the expected increased Pb-Pb luminosity of the LHC. The choice of Monolithic Active Pixel Sensors has be...

  2. Hybrid three-dimensional and support vector machine approach for automatic vehicle tracking and classification using a single camera

    Science.gov (United States)

    Kachach, Redouane; Cañas, José María

    2016-05-01

    Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by combining a two-dimensional proximity tracking algorithm with the Kanade-Lucas-Tomasi feature tracking algorithm. The last module classifies the identified shapes into five vehicle categories: motorcycle, car, van, bus, and truck, using three-dimensional templates and an algorithm based on the histogram of oriented gradients and a support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on the GRAM-RTM dataset and on a real video dataset that is made publicly available as part of this work.
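    The histogram-of-oriented-gradients features feeding the SVM classifier can be illustrated for a single cell. The sketch computes unsigned gradient-orientation bins weighted by magnitude; block normalization and the SVM itself are omitted, and the bin count is an assumption:

    ```python
    import numpy as np

    def orientation_histogram(patch, bins=9):
        """HOG-style descriptor for one cell: central-difference gradients,
        unsigned orientations binned into `bins` buckets, magnitude-weighted,
        then L2-normalized. The concatenation of such cell histograms is what
        a linear SVM would score."""
        patch = np.asarray(patch, dtype=float)
        gx = np.zeros_like(patch)
        gy = np.zeros_like(patch)
        gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
        gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
        mag = np.hypot(gx, gy)
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
        hist = np.zeros(bins)
        for m, a in zip(mag.ravel(), ang.ravel()):
            hist[int(a // (180.0 / bins)) % bins] += m
        return hist / (np.linalg.norm(hist) + 1e-9)
    ```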

  3. Audit Follow-up Tracking System (AFTS)

    Data.gov (United States)

    Office of Personnel Management — The Audit Follow-up Tracking System (AFTS) is used to track, monitor, and report on audits and open recommendations of the U.S. Office of Personnel Management (OPM)...

  4. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience; however, few skilled sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones, and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  5. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space upon loss of nitrogen purge pressure

  6. Decontamination and Decommissioning Equipment Tracking System (DDETS)

    International Nuclear Information System (INIS)

    Cook, S.

    1994-07-01

At the request of the Department of Energy (DOE)(EM-50), the Scientific Computing Unit developed a prototype system to track information and data relevant to equipment and tooling removed during decontamination and decommissioning activities. The DDETS proof-of-concept tracking system utilizes one-dimensional (1D) and two-dimensional (2D) bar coding technology to retain and track information such as identification number, manufacturer, requisition information, and various contaminant information. The information is encoded in a bar code, printed on a label, and can be attached to the corresponding equipment. The DDETS was developed using a proven relational database management system which allows the addition, modification, printing, and deletion of data. In addition, communication interfaces with bar code printers and bar code readers were developed. Additional features of the system include: (a) four different reports available for the user (REAPS, transaction, and two inventory), (b) remote automated inventory tracking capability (2D bar codes allow equipment to be scanned/tracked without being linked to the DDETS database), (c) edit, update, delete, and query capabilities, (d) an on-line bar code label printing utility (data from 2D bar codes can be scanned directly into the database, simplifying data entry), and (e) an automated data backup utility. Compatibility with the Reportable Excess Automated Property System (REAPS) to upload data from DDETS is planned

  7. Real-time geo-referenced video mosaicking with the MATISSE system

    DEFF Research Database (Denmark)

    Vincent, Anne-Gaelle; Pessel, Nathalie; Borgetto, Manon

This paper presents the MATISSE system: Mosaicking Advanced Technologies Integrated in a Single Software Environment. This system aims at producing in-line and off-line geo-referenced video mosaics of the seabed, given a video input and navigation data. It is based upon several techniques of image...

  8. DCS Budget Tracking System

    Data.gov (United States)

    Social Security Administration — DCS Budget Tracking System database contains budget information for the Information Technology budget and the 'Other Objects' budget. This data allows for monitoring...

  9. Tracking System : Suaineadh satellite experiment

    OpenAIRE

    Brengesjö, Carl; Selin, Martine

    2011-01-01

    The purpose of this bachelor thesis is to present a tracking system for the Suaineadh satellite experiment. The experiment is a part of the REXUS (Rocket EXperiments for University Students) program and the objective is to deploy a foldable web in space. The assignment of this thesis is to develop a tracking system to find the parts from the Suaineadh experiment that will land on Earth. It is important to find the parts and recover all the data that the experiment performed during the travel ...

  10. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    Science.gov (United States)

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  11. Pollen Bearing Honey Bee Detection in Hive Entrance Video Recorded by Remote Embedded System for Pollination Monitoring

    Science.gov (United States)

    Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.

    2016-06-01

Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for pollen bearing honey bee detection in surveillance video obtained at the entrance of a hive. The proposed system can be used as a part of a more complex system for tracking and counting of honey bees, with remote pollination monitoring as a final goal. The proposed method is executed in real time on embedded systems co-located with a hive. Background subtraction, color segmentation and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen bearing honey bees and honey bees that do not have a pollen load, is performed using a nearest mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. That favors the proposed method, particularly considering that real-time video transmission to a remote high performance computing workstation is still an issue, and transfer of the obtained parameters of the pollination process is much easier.
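The nearest mean classifier named above is simple enough to sketch in full: store one mean feature vector per class, then assign a sample to the class whose mean is nearest. The two-feature descriptor (color variance, eccentricity) matches the abstract, but the training values below are illustrative, not taken from the paper:

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Compute one mean feature vector per class label."""
    labels = sorted(set(y))
    return {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
            for c in labels}

def nearest_mean_predict(means, x):
    """Assign x to the class whose mean is closest in Euclidean distance."""
    return min(means, key=lambda c: np.linalg.norm(np.asarray(x) - means[c]))

# Hypothetical training data: [color_variance, eccentricity] per bee image.
X = [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]]
y = ["no_pollen", "no_pollen", "pollen", "pollen"]
means = nearest_mean_fit(X, y)
```

The appeal for an embedded platform is clear from the code: training is one mean per class, and prediction is a couple of distance computations per detection.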

  12. Reasonable Accommodation Information Tracking System

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reasonable Accommodation Information Tracking System (RAITS) is a case management system that allows the National Reasonable Accommodation Coordinator (NRAC) and...

  13. Fish4Knowledge collecting and analyzing massive coral reef fish video data

    CERN Document Server

    Chen-Burger, Yun-Heh; Giordano, Daniela; Hardman, Lynda; Lin, Fang-Pang

    2016-01-01

    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3 year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 Tb of storage, supercomputer processing, video target detection and tracking, fish species recognition and analysis, a large SQL database to record the results and an efficient retrieval mechanism. Novel user interface mechanisms were developed to provide easy access for marine ecologists, who wanted to explore the dataset. The book is a useful resource for system builders, as it gives an overview of the many new methods that were created to build the Fish4Knowledge system in a manner that also allows readers to see ho...

  14. Matter Tracking Information System -

    Data.gov (United States)

    Department of Transportation — The Matter Tracking Information System (MTIS) principle function is to streamline and integrate the workload and work activity generated or addressed by our 300 plus...

  15. Particle displacement tracking for PIV

    Science.gov (United States)

    Wernet, Mark P.

    1990-01-01

A new Particle Imaging Velocimetry (PIV) data acquisition and analysis system, which is an order of magnitude faster than any previously proposed system, has been constructed and tested. The new Particle Displacement Tracking (PDT) system is an all-electronic technique employing a video camera and a large-memory-buffer frame-grabber board. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine velocity vectors. Application of the PDT technique to a counter-rotating vortex flow produced over 1100 velocity vectors in 110 seconds when processed on an 80386 PC.
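The final step of a PDT-style pipeline, turning a decoded sequence of particle positions into velocity vectors, amounts to first-order differencing over the known inter-exposure interval. A sketch under that simplification (the positions and interval are illustrative; decoding the time-coded image itself is not shown):

```python
def velocity_vectors(positions, dt):
    """Convert a time sequence of particle positions (x, y), sampled
    at a uniform interval dt, into velocity vectors by first-order
    differencing between consecutive exposures."""
    return [((x2 - x1) / dt, (y2 - y1) / dt)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
```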

  16. BEEtag: A Low-Cost, Image-Based Tracking System for the Study of Animal Behavior and Locomotion.

    Directory of Open Access Journals (Sweden)

    James D Crall

Full Text Available A fundamental challenge common to studies of animal movement, behavior, and ecology is the collection of high-quality datasets on the spatial positions of animals as they change through space and time. Recent innovations in tracking technology have allowed researchers to collect large and highly accurate datasets on animal spatiotemporal position while vastly decreasing the time and cost of collecting such data. One technique that is of particular relevance to the study of behavioral ecology involves tracking visual tags that can be uniquely identified in separate images or movie frames. These tags can be located within images that are visually complex, making them particularly well suited for longitudinal studies of animal behavior and movement in naturalistic environments. While several software packages have been developed that use computer vision to identify visual tags, these software packages are either (a) not optimized for identification of single tags, which is generally of the most interest for biologists, or (b) subject to licensing issues, and therefore their use in the study of animal behavior has been limited. Here, we present BEEtag, an open-source, image-based tracking system in Matlab that allows for unique identification of individual animals or anatomical markers. The primary advantages of this system are that it (a) independently identifies animals or marked points in each frame of a video, limiting error propagation, (b) performs well in images with complex backgrounds, and (c) is low-cost. To validate the use of this tracking system in animal behavior, we mark and track individual bumblebees (Bombus impatiens) and recover individual patterns of space use and activity within the nest. Finally, we discuss the advantages and limitations of this software package and its application to the study of animal movement, behavior, and ecology.

  17. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) are looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB were asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  18. Vehicle Detection with Occlusion Handling, Tracking, and OC-SVM Classification: A High Performance Vision-Based System

    Science.gov (United States)

    Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael

    2018-01-01

This paper presents a high performance vision-based system with a single static camera for traffic surveillance, performing moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. When occlusion is present, an algorithm is applied to reduce it. The tracking is performed with an adaptive Kalman filter. Finally, the selected geometric features (estimated area, height, and width) are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results on eight real traffic videos with more than 4000 ground-truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate (recall), precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
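The Kalman tracking step mentioned above can be sketched as a plain constant-velocity Kalman filter over the vehicle centroid; the noise parameters below are placeholders, not the paper's adaptive values:

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity model: state [x, y, vx, vy], measurement (x, y)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)      # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)      # measure position only
    Q = q * np.eye(4)                        # process noise
    R = r * np.eye(2)                        # measurement noise
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle given measured centroid z = (cx, cy)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(z, float) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

An adaptive variant would rescale Q and R online from the innovation statistics; the predict/update cycle itself is unchanged.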

  19. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    International Nuclear Information System (INIS)

    Pan, Guobing; Chen, Jiaoliao; Xin, Wenhui; Yan, Guozheng

    2011-01-01

Wireless capsule endoscopy (WCE), as a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, existing WCE systems are not widely applied in the clinic because of their low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. This WCE system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE can image the GI tract and transmit the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and experiments were performed to test the energy transfer capability. The results showed that the wireless electric power supply system was able to transfer more than 136 mW of power, which was enough for the operation of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig at a resolution of 320 × 240, and transmitted NTSC-format video outside the body. Because of the wireless power supply, a video WCE system with high frame rate and high resolution becomes feasible, providing a novel solution for the diagnosis of the GI tract in the clinic

  20. A Video Game Platform for Exploring Satellite and In-Situ Data Streams

    Science.gov (United States)

    Cai, Y.

    2014-12-01

Exploring spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models intend to reveal the inner structure, dynamics, and relationships of things. However, they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been under development, which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. This system can also be used for satellite mission planning and public outreach.

  1. Guide to Synchronization of Video Systems to IRIG Timing

    Science.gov (United States)

    1992-07-01

This document (RCC Document 456-92, Optical Systems Group, Range Commanders Council, White Sands Missile Range, NM 88002-5110) addresses the broad field of video synchronization to IRIG timing, with emphasis on color synchronization. Before delving into the details of synchronization, it reviews the reasons for synchronizing video systems in government and industry.

  2. Evolution of the SOFIA tracking control system

    Science.gov (United States)

    Fiebig, Norbert; Jakob, Holger; Pfüller, Enrico; Röser, Hans-Peter; Wiedemann, Manuel; Wolf, Jürgen

    2014-07-01

The airborne observatory SOFIA (Stratospheric Observatory for Infrared Astronomy) is undergoing a modernization of its tracking system. This includes new, highly sensitive tracking cameras, control computers, filter wheels and other equipment, as well as a major redesign of the control software. The experiences along the migration path from an aged 19" VMEbus-based control system to modern industrial PCs, from the VxWorks real-time operating system to embedded Linux, and to a state-of-the-art software architecture are presented. Further, a concept is presented for operating the new camera as a scientific instrument in parallel with tracking.

  3. Developing a multipurpose sun tracking system using fuzzy control

    Energy Technology Data Exchange (ETDEWEB)

    Alata, Mohanad [Department of Mechanical Engineering, Jordan University of Science and Technology (JUST), PO Box 3030, Irbid 22110 (Jordan)]. E-mail: alata@just.edu.jo; Al-Nimr, M.A. [Department of Mechanical Engineering, Jordan University of Science and Technology (JUST), PO Box 3030, Irbid 22110 (Jordan); Qaroush, Yousef [Department of Mechanical Engineering, Jordan University of Science and Technology (JUST), PO Box 3030, Irbid 22110 (Jordan)

    2005-05-01

    The present work demonstrates the design and simulation of time controlled step sun tracking systems that include: one axis sun tracking with the tilted aperture equal to the latitude angle, equatorial two axis sun tracking and azimuth/elevation sun tracking. The first order Sugeno fuzzy inference system is utilized for modeling and controller design. In addition, an estimation of the insolation incident on a two axis sun tracking system is determined by fuzzy IF-THEN rules. The approach starts by generating the input/output data. Then, the subtractive clustering algorithm, along with least square estimation (LSE), generates the fuzzy rules that describe the relationship between the input/output data of solar angles that change with time. The fuzzy rules are tuned by an adaptive neuro-fuzzy inference system (ANFIS). Finally, an open loop control system is designed for each of the previous types of sun tracking systems. The results are shown using simulation and virtual reality. The site of application is chosen at Amman, Jordan (32 deg. North, 36 deg. East), and the period of controlling and simulating each type of tracking system is the year 2003.
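Independent of the fuzzy controller, the solar angles that drive any of these trackers follow from standard relations. A sketch using Cooper's declination formula and the usual elevation equation, applied here to the paper's site latitude of 32° N (the function name and argument conventions are illustrative):

```python
import math

def solar_elevation(day_of_year, solar_hour, latitude_deg):
    """Sun elevation angle (degrees) above the horizon.

    Uses Cooper's declination formula
        dec = 23.45° * sin(360° * (284 + n) / 365)
    and the standard relation
        sin(alt) = sin(dec)sin(lat) + cos(dec)cos(lat)cos(hour_angle),
    with the hour angle at 15° per hour from solar noon.
    """
    dec = math.radians(23.45) * math.sin(
        math.radians(360.0 * (284 + day_of_year) / 365.0))
    lat = math.radians(latitude_deg)
    ha = math.radians(15.0 * (solar_hour - 12.0))
    s = (math.sin(dec) * math.sin(lat)
         + math.cos(dec) * math.cos(lat) * math.cos(ha))
    return math.degrees(math.asin(s))
```

At solar noon on the summer solstice (day 172), a site at 32° N sees the sun at roughly 90° - (32° - 23.45°) ≈ 81.5° elevation, which is the kind of input/output pair the subtractive clustering step would be trained on.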

  4. The Habituation/Cross-Habituation Test Revisited: Guidance from Sniffing and Video Tracking

    Directory of Open Access Journals (Sweden)

    G. Coronas-Samano

    2016-01-01

Full Text Available The habituation/cross-habituation test (HaXha) is a spontaneous odor discrimination task that has been used for many decades to evaluate olfactory function in animals. Animals are presented repeatedly with the same odorant, after which a new odorant is introduced. The time the animal explores the odor object is measured. An animal is considered to cross-habituate during the novel-stimulus trial when the exploration time is higher than in the prior trial, which indicates the degree of olfactory patency. On the other hand, habituation across the repeated trials involves decreased exploration time and is related to memory patency, especially at long intervals. Classically, exploration is timed using a stopwatch when the animal is within 2 cm of the object and aimed toward it. These criteria are intuitive, but it is unclear how they relate to olfactory exploration, that is, sniffing. We used video tracking combined with plethysmography to improve accuracy, avoid observer bias, and propose more robust criteria for exploratory scoring when sniff measures are not available. We also demonstrate that sniff rate combined with proximity is the most direct measure of odorant exploration and provide a robust and sensitive criterion.

  5. Mobile Tracking Systems Using Meter Class Reflective Telescopes

    Science.gov (United States)

    Sturzenbecher, K.; Ehrhorn, B.

This paper discusses the use of large reflective telescopes on mobile tracking systems with modern instrument control systems. Large optics can be defined as reflective telescopes with an aperture of at least 20 inches in diameter. New carbon composite construction techniques allow for larger, stronger, and lighter telescopes, ranging from 240 pounds for a 20 inch to 800 pounds for a 32 inch, making them ideal for mobile tracking systems. These telescopes have better light gathering capability and produce larger images with greater detail at longer range than conventional refractive lenses. In a mobile configuration these systems provide the ability to move the observation platform to the optimal location anywhere in the world. Mounting and systems integration - We will discuss how large telescopes can be physically fitted to the mobile tracking system and integrated with the tracking system's digital control system. We will highlight the remote control capabilities. We will discuss special calibration techniques available in a modern instrument control system, such as star calibration and calibration of sensors. Tracking Performance - We will discuss the impact of using large telescopes on the performance of the mobile tracking system. We will highlight the capabilities for auto-tracking and sidereal-rate tracking in a mobile mount. Large optics performance - We will discuss the advantages of two-mirror Ritchey-Chrétien reflective optics, which offer in-focus imaging across the spectrum, from visible to Long Wave Infrared. These zero-expansion optics won't lose figure or focus during temperature changes, and the carbon composite telescope tube is thermally inert. The primary mirror is a modern lightweight "dish" mirror for low thermal mass and is center supported/self balancing. Applications - We will discuss Visible-IR Imaging requirements, Optical Rangefinders, and capabilities for special filters to increase resolution in difficult conditions such as

  6. Tumor tracking and motion compensation with an adaptive tumor tracking system (ATTS): System description and prototype testing

    International Nuclear Information System (INIS)

    Wilbert, Juergen; Meyer, Juergen; Baier, Kurt; Guckenberger, Matthias; Herrmann, Christian; Hess, Robin; Janka, Christian; Ma Lei; Mersebach, Torben; Richter, Anne; Roth, Michael; Schilling, Klaus; Flentje, Michael

    2008-01-01

A novel system for real-time tumor tracking and motion compensation with a robotic HexaPOD treatment couch is described. The approach is based on continuous tracking of the tumor motion in portal images without implanted fiducial markers, using the therapeutic megavoltage beam, and tracking of abdominal breathing motion with optical markers. Based on the two independently acquired data sets, the table movements for motion compensation are calculated. The principle of operation of the entire prototype system is detailed first. In the second part, the performance of the HexaPOD couch was investigated with a robotic four-dimensional phantom capable of simulating real patient tumor trajectories in three-dimensional space. The performance and limitations of the HexaPOD table and the control system were characterized in terms of their dynamic behavior. The maximum speed and acceleration of the HexaPOD were 8 mm/s and 34.5 mm/s² in the lateral direction, and 9.5 mm/s and 29.5 mm/s² in the longitudinal and anterior-posterior directions, respectively. Baseline drifts of the mean tumor position of realistic lung tumor trajectories could be fully compensated. For continuous tumor tracking and motion compensation, a reduction of tumor motion by up to 68% of the original amplitude was achieved. In conclusion, this study demonstrated that it is technically feasible to compensate breathing-induced tumor motion in the lung with the adaptive tumor tracking system
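At its core, the compensation loop commands the couch opposite to the measured tumor displacement, limited by the couch's achievable speed. A one-axis sketch under that simplification (the real system fuses portal-image and optical-marker data and runs a proper controller; the 8 mm/s figure is the lateral limit quoted above):

```python
def couch_command(tumor_disp_mm, dt_s, max_speed_mm_s=8.0):
    """Counter-motion command for one couch axis.

    tumor_disp_mm: measured tumor displacement this control cycle.
    dt_s: control cycle duration in seconds.
    Returns the couch travel for this cycle: opposite in sign to the
    tumor displacement, clipped to what the couch can move in dt_s.
    """
    limit = max_speed_mm_s * dt_s   # max travel achievable this cycle
    target = -tumor_disp_mm         # move opposite to the tumor
    return max(-limit, min(limit, target))
```

The clipping explains why fast, large-amplitude motion is only partially compensated (the 68% reduction above), while slow baseline drifts can be removed completely.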

  7. Systems for tracking minimally invasive surgical instruments.

    Science.gov (United States)

    Chmarra, M K; Grimbergen, C A; Dankelman, J

    2007-01-01

    Minimally invasive surgery (e.g. laparoscopy) requires special surgical skills, which should be objectively assessed. Several studies have shown that motion analysis is a valuable assessment tool of basic surgical skills in laparoscopy. However, to use motion analysis as the assessment tool, it is necessary to track and record the motions of laparoscopic instruments. This article describes the state of the art in research on tracking systems for laparoscopy. It gives an overview on existing systems, on how these systems work, their advantages, and their shortcomings. Although various approaches have been used, none of the tracking systems to date comes out as clearly superior. A great number of systems can be used in training environment only, most systems do not allow the use of real laparoscopic instruments, and only a small number of systems provide force feedback.

  8. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Pedraza Lopez, S; The ATLAS collaboration

    2012-01-01

In order to reconstruct trajectories of charged particles, ATLAS is equipped with a tracking system built using different technologies embedded in a 2T solenoidal magnetic field. ATLAS physics goals require high-resolution, unbiased measurement of all charged particle kinematic parameters in order to assure accurate invariant mass reconstruction and interaction and decay vertex finding. These critically depend on the systematic effects related to the alignment of the tracking system. In order to eliminate malicious systematic deformations, various advanced tools and techniques have been put in place. These include information from known mass resonances, the energy of electrons and positrons measured by the electromagnetic calorimeters, etc. Despite being stable under normal running conditions, the ATLAS tracking system responds to sudden environmental changes (temperature, magnetic field) with small collective deformations. These have to be identified and corrected in order to assure uniform, highest quality tracking...

  9. Video Tracking Protocol to Screen Deterrent Chemistries for Honey Bees.

    Science.gov (United States)

    Larson, Nicholas R; Anderson, Troy D

    2017-06-12

The European honey bee, Apis mellifera L., is an economically and agriculturally important pollinator that generates billions of dollars annually. Honey bee colony numbers have been declining in the United States and many European countries since 1947. A number of factors play a role in this decline, including the unintentional exposure of honey bees to pesticides. The development of new methods and regulations is warranted to reduce pesticide exposure of these pollinators. One approach is the use of repellent chemistries that deter honey bees from a recently pesticide-treated crop. Here, we describe a protocol to discern the deterrence of honey bees exposed to select repellent chemistries. Honey bee foragers are collected and starved overnight in an incubator 15 h prior to testing. Individual honey bees are placed into Petri dishes that have either a sugar-agarose cube (control treatment) or a sugar-agarose-compound cube (repellent treatment) placed in the middle of the dish. The Petri dish serves as the arena that is placed under a camera in a light box to record the honey bee locomotor activities using video tracking software. A total of 8 control and 8 repellent treatments were analyzed for a 10 min period, and each treatment was duplicated with new honey bees. Here, we demonstrate that honey bees are deterred from sugar-agarose cubes with a compound treatment, whereas honey bees are attracted to sugar-agarose cubes without an added compound.
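The deterrence measure reduces to scoring how long each bee's trajectory stays near the cube. A sketch assuming per-frame (x, y) positions exported from the tracking software; the proximity radius and frame rate here are illustrative parameters, not values from the protocol:

```python
import math

def exploration_time(track, cube_xy, radius_cm=2.0, fps=30):
    """Seconds a tracked bee spends within radius_cm of the cube centre.

    track: iterable of per-frame (x, y) positions in cm.
    cube_xy: (x, y) of the cube centre in the same coordinates.
    """
    cx, cy = cube_xy
    near_frames = sum(1 for x, y in track
                      if math.hypot(x - cx, y - cy) <= radius_cm)
    return near_frames / fps
```

Comparing this score between control and repellent arenas (lower time near the compound cube) is the deterrence criterion the protocol evaluates.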

  10. Procurement Tracking System (PTS)

    Data.gov (United States)

    Office of Personnel Management — The Procurement Tracking System (PTS) is used solely by the procurement staff of the Office of the Inspector General (OIG) at the U.S. Office of Personnel Management...

  11. Electro-Optical Data Acquisition and Tracking System

    Data.gov (United States)

    Federal Laboratory Consortium — The Electro-Optical Data Acquisition and Tracking System (EDATS) dynamically tracks and measures target signatures. It consists of an instrumentation van integrated...

  12. Optimum Design Of On Grid Pv System Using Tracking System

    Directory of Open Access Journals (Sweden)

    Saeed Mansour

    2015-05-01

    Full Text Available Abstract Fossil fuel is a major issue in the world due to its increasing cost and its depletion under continuously rising electricity demand. With the continuous decrease in the cost of PV panels, it is interesting to consider generating electricity from a PV system. To provide electric energy to a load, whether in a remote area where the electric grid utility is not available or where a connection to the grid utility is available, there are two approaches to a photovoltaic system: PV without a tracking system (a fixed system) and PV with a tracking system. The results show that a PV system with a tracking system generates more energy than a fixed-panel system; however, the cost per produced kWh is lower when using fixed panels. This trade-off is the basis for the choice between the two approaches. In this work, a system design and cost analysis for the two approaches of photovoltaic system are presented.
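
The trade-off noted above can be made concrete with a toy cost-per-kWh comparison. All capital costs, energy yields, and lifetimes below are illustrative assumptions, not the paper's data:

```python
# Toy comparison of a fixed PV array versus one with a tracker:
# the tracker raises annual energy yield but also raises capital cost.

def cost_per_kwh(capital_cost, annual_kwh, lifetime_years):
    """Simple levelized cost: capital cost spread over lifetime production."""
    return capital_cost / (annual_kwh * lifetime_years)

# Hypothetical numbers: the tracked system yields 25% more energy
# but costs 40% more up front.
fixed   = cost_per_kwh(capital_cost=10_000, annual_kwh=8_000,  lifetime_years=25)
tracked = cost_per_kwh(capital_cost=14_000, annual_kwh=10_000, lifetime_years=25)
```

Under these assumptions the fixed array still produces the cheaper kWh, which mirrors the paper's conclusion that higher tracked yield does not automatically mean lower unit cost.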

  13. PageRank tracker: from ranking to tracking.

    Science.gov (United States)

    Gong, Chen; Fu, Keren; Loza, Artur; Wu, Qiang; Liu, Jia; Yang, Jie

    2014-06-01

    Video object tracking is widely used in many real-world applications, and it has been extensively studied for over two decades. However, tracking robustness is still an issue in most existing methods, due to the difficulties with adaptation to environmental or target changes. In order to improve adaptability, this paper formulates the tracking process as a ranking problem, and the PageRank algorithm, which is a well-known webpage ranking algorithm used by Google, is applied. Labeled and unlabeled samples in tracking application are analogous to query webpages and the webpages to be ranked, respectively. Therefore, determining the target is equivalent to finding the unlabeled sample that is the most associated with existing labeled set. We modify the conventional PageRank algorithm in three aspects for tracking application, including graph construction, PageRank vector acquisition and target filtering. Our simulations with the use of various challenging public-domain video sequences reveal that the proposed PageRank tracker outperforms mean-shift tracker, co-tracker, semiboosting and beyond semiboosting trackers in terms of accuracy, robustness and stability.
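
The core ranking step can be illustrated with a minimal PageRank power iteration in Python. The sample-affinity graph, its weights, and the damping factor below are invented for illustration; they are not the paper's modified graph construction or target-filtering stages:

```python
# Minimal PageRank power iteration over a small affinity graph.
# In the tracking analogy above, nodes are candidate samples and the
# highest-scoring node would be taken as the most associated with the
# labeled set.

def pagerank(adj, damping=0.85, iters=100, tol=1e-9):
    """adj[i][j] = weight of edge i -> j; returns one score per node."""
    n = len(adj)
    out = [sum(row) or 1.0 for row in adj]   # outgoing weight per node
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            rank = sum(scores[i] * adj[i][j] / out[i] for i in range(n))
            new.append((1 - damping) / n + damping * rank)
        done = max(abs(a - b) for a, b in zip(new, scores)) < tol
        scores = new
        if done:
            break
    return scores

# Four candidate samples; nodes 0 and 1 are strongly mutually linked.
graph = [
    [0.0, 1.0, 0.2, 0.1],
    [1.0, 0.0, 0.2, 0.1],
    [0.2, 0.2, 0.0, 0.5],
    [0.1, 0.1, 0.5, 0.0],
]
scores = pagerank(graph)
best = max(range(len(scores)), key=scores.__getitem__)
```

Because the weights are normalized per node, the scores remain a probability distribution, and the strongly linked pair ends up ranked highest.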

  14. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable

  15. 47 CFR 64.1320 - Payphone call tracking system audits.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Payphone call tracking system audits. 64.1320... call tracking system audits. (a) Unless it has entered into an alternative compensation arrangement... Completing Carrier must undergo an audit of its § 64.1310(a)(1) tracking system by an independent third party...

  16. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems

  17. The ASDEX upgrade digital video processing system for real-time machine protection

    Energy Technology Data Exchange (ETDEWEB)

    Drube, Reinhard, E-mail: reinhard.drube@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Neu, Gregor [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard H.; Lüddecke, Klaus [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, 82393 Iffeldorf (Germany); Lunt, Tilmann; Herrmann, Albrecht [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany)

    2013-11-15

    Highlights: • We present the Real-Time Video diagnostic system of ASDEX Upgrade. • We show the implemented image processing algorithms for machine protection. • The way to achieve a robust operating multi-threading Real-Time system is described. -- Abstract: This paper describes the design, implementation, and operation of the Video Real-Time (VRT) diagnostic system of the ASDEX Upgrade plasma experiment and its integration with the ASDEX Upgrade Discharge Control System (DCS). Hot spots produced by heating systems erroneously or accidentally hitting the vessel walls, or from objects in the vessel reaching into the plasma outer border, show up as bright areas in the videos during and after the reaction. A system to prevent damage to the machine by allowing for intervention in a running discharge of the experiment was proposed and implemented. The VRT was implemented on a multi-core real-time Linux system. Up to 16 analog video channels (color and b/w) are acquired and multiple regions of interest (ROI) are processed on each video frame. Detected critical states can be used to initiate appropriate reactions – e.g. gracefully terminate the discharge. The system has been in routine operation since 2007.
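
As a rough software analogue of the protection logic described above (not the actual VRT implementation), a region of interest can be flagged when too many of its pixels exceed a brightness threshold. All thresholds and the toy frame below are illustrative assumptions:

```python
# Flag a region of interest (ROI) as a potential hot spot when the
# fraction of bright pixels inside it exceeds a threshold.

def hot_spot(frame, roi, brightness=200, fraction=0.05):
    """frame: 2-D list of grayscale pixels; roi: (top, left, height, width)."""
    top, left, h, w = roi
    pixels = [frame[r][c] for r in range(top, top + h)
                          for c in range(left, left + w)]
    bright = sum(1 for p in pixels if p >= brightness)
    return bright / len(pixels) >= fraction

# A 4x4 frame with one saturated pixel inside the top-left 2x2 ROI.
frame = [[10] * 4 for _ in range(4)]
frame[1][1] = 255
alarm = hot_spot(frame, roi=(0, 0, 2, 2))   # 1 of 4 pixels is bright
```

In a real protection system such a flag would feed the discharge control system, e.g. to trigger a graceful termination as the abstract describes.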

  18. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers the advantages of fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  19. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    Science.gov (United States)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites. User interface technology with a simple on-screen display has been implemented more and more widely. User demands are increasing and the systems have more applicable fields due to the high penetration rate of the Internet; therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and successive frame images from the web camera are compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. The operating system was ported to an embedded Linux kernel and a root file system was mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program developed with the TCP/IP protocol.

  20. Practical system for generating digital mixed reality video holograms.

    Science.gov (United States)

    Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il

    2016-07-10

    We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphic processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally in free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is objectively verified by users' subjective evaluations.
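
The depth-mixing idea can be illustrated with a minimal per-pixel Z-buffer merge in Python. The layers and depth values below are toy data, not the system's multi-GPU pipeline:

```python
# Z-buffer merge: at each pixel keep whichever of the real or virtual
# layer is closer to the camera (smaller depth), which resolves mutual
# occlusion before hologram generation.

def zbuffer_merge(real_rgb, real_z, virt_rgb, virt_z):
    h, w = len(real_z), len(real_z[0])
    color = [[None] * w for _ in range(h)]
    depth = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if virt_z[r][c] < real_z[r][c]:       # virtual object in front
                color[r][c], depth[r][c] = virt_rgb[r][c], virt_z[r][c]
            else:
                color[r][c], depth[r][c] = real_rgb[r][c], real_z[r][c]
    return color, depth

real_rgb = [["R", "R"], ["R", "R"]]
real_z   = [[1.0, 1.0], [1.0, 1.0]]
virt_rgb = [["V", "V"], ["V", "V"]]
virt_z   = [[0.5, 2.0], [2.0, 0.5]]   # virtual layer closer on the diagonal
color, depth = zbuffer_merge(real_rgb, real_z, virt_rgb, virt_z)
```

The merged color/depth pair is what a hologram generation stage would then consume; the pixel-wise independence of the comparison is also why the step parallelizes well on GPUs.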

  1. Can fractal methods applied to video tracking detect the effects of deltamethrin pesticide or mercury on the locomotion behavior of shrimps?

    Science.gov (United States)

    Tenorio, Bruno Mendes; da Silva Filho, Eurípedes Alves; Neiva, Gentileza Santos Martins; da Silva, Valdemiro Amaro; Tenorio, Fernanda das Chagas Angelo Mendes; da Silva, Themis de Jesus; Silva, Emerson Carlos Soares E; Nogueira, Romildo de Albuquerque

    2017-08-01

    Shrimps can accumulate environmental toxicants and suffer behavioral changes. However, methods to quantitatively detect changes in the behavior of these shrimps are still needed. The present study aims to verify whether mathematical and fractal methods applied to video tracking can adequately describe changes in the locomotion behavior of shrimps exposed to low concentrations of toxic chemicals, such as 0.15 µg/L deltamethrin pesticide or 10 µg/L mercuric chloride. Results showed no change after 1 min, 4, 24, and 48 h of treatment. However, after 72 and 96 h of treatment, both the linear methods describing the track length, mean speed, and mean distance from the current to the previous track point, as well as the non-linear methods of fractal dimension (box counting or information entropy) and multifractal analysis, were able to detect changes in the locomotion behavior of shrimps exposed to deltamethrin. Analysis of the angular parameters of the track-point vectors and lacunarity were not sensitive to those changes. None of the methods detected adverse effects of mercury exposure. These mathematical and fractal methods, applicable in software, represent low-cost, useful tools in the toxicological analysis of shrimps for food and water quality and the biomonitoring of ecosystems. Copyright © 2017 Elsevier Inc. All rights reserved.
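
The box-counting estimate of a track's fractal dimension can be sketched as follows: count the boxes touched by the track at several scales, then fit log(count) against log(1/size). The synthetic straight-line track (whose dimension should be close to 1) and the box sizes are illustrative assumptions:

```python
# Box-counting fractal dimension of a 2-D trajectory.

import math

def box_count(points, size):
    """Number of size x size boxes touched by the track points."""
    return len({(int(x // size), int(y // size)) for x, y in points})

def box_dimension(points, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log(count) versus log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A straight-line track: its box-counting dimension is close to 1.
line = [(i * 0.5, i * 0.5) for i in range(200)]
d = box_dimension(line)
```

A more tortuous track fills the plane more densely and yields a dimension closer to 2, which is the kind of shift the study uses to detect altered locomotion.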

  2. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the demanding tasks in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.
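
A software analogue of selecting frames of interest by motion detection (not the clustering-based VLSI design itself) can be sketched as a thresholded frame difference; both thresholds below are illustrative:

```python
# Report motion when enough pixels differ from a reference frame by more
# than a noise threshold.

def motion_detected(reference, frame, diff_thresh=30, pixel_thresh=2):
    """True when at least pixel_thresh pixels change by >= diff_thresh."""
    changed = sum(1 for ref_row, row in zip(reference, frame)
                    for ref_p, p in zip(ref_row, row)
                    if abs(p - ref_p) >= diff_thresh)
    return changed >= pixel_thresh

reference = [[100] * 4 for _ in range(4)]
static    = [[102] * 4 for _ in range(4)]   # sensor noise only
moving    = [row[:] for row in reference]
moving[1][1] = moving[1][2] = 200           # an object enters the scene
```

In the prototyped system this per-frame decision is what gates which frames are treated as "frames of interest"; the FPGA implementation performs it in hardware at full PAL frame rate.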

  3. Video integrated measurement system. [Diagnostic display devices

    Energy Technology Data Exchange (ETDEWEB)

    Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.

    1982-06-01

    A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.

  4. Utilization of KSC Present Broadband Communications Data System for Digital Video Services

    Science.gov (United States)

    Andrawis, Alfred S.

    2002-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.

  5. Demonstrating EnTracked a System for Energy-Efficient Position Tracking for Mobile Devices

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun; Jensen, Jakob Langdal; Godsk, Torben

    An important feature of a modern mobile device is that it can position itself. Not only for use on the device but also for remote applications that require tracking of the device. To be useful, such position tracking has to be energy-efficient to avoid having a major impact on the battery life of the mobile device. To address this challenge we have built a system named EnTracked that, based on the estimation and prediction of system conditions and mobility, schedules position updates to both minimize energy consumption and optimize robustness. In this demonstration we would like to show how

  6. Enhancing cognition with video games: a multiple game training study.

    Directory of Open Access Journals (Sweden)

    Adam C Oei

    Full Text Available Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.

  7. Enhancing Cognition with Video Games: A Multiple Game Training Study

    Science.gov (United States)

    Oei, Adam C.; Patterson, Michael D.

    2013-01-01

    Background Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. Methodology/Principal Findings We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Conclusion/Significance Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be

  8. Enhancing cognition with video games: a multiple game training study.

    Science.gov (United States)

    Oei, Adam C; Patterson, Michael D

    2013-01-01

    Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.

  9. Coupled auralization and virtual video for immersive multimedia displays

    Science.gov (United States)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat the audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. A system has been implemented for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.

  10. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed at lamppost of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets, building blocks and surrounded by gates and water. The video recordings are

  11. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    Science.gov (United States)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    In order to improve the precision of an optical-electric tracking device, an improved device based on MEMS is proposed that addresses the tracking error and random drift of the gyroscope sensor. According to the principles of time-series analysis of random sequences, an AR model of the gyro random error is established based on a Kalman filter algorithm, and the gyro output signals are filtered multiple times with the Kalman filter. An ARM is used as the microcontroller, and the servo motor is controlled by a fuzzy PID full closed-loop control algorithm, with lead compensation and feed-forward links added to reduce the response lag to angle inputs. Feed-forward makes the output follow the input closely, while the lead compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitor module and remote monitoring software (Visual Basic 6.0) are used to monitor the servo motor state in real time: the video monitor module gathers video signals, and the wireless module sends them to the upper computer, which displays the motor's running state in the Visual Basic 6.0 window. At the same time, a detailed analysis of the main error sources is given. The quantitative analysis of the errors from bandwidth and the gyro sensor makes the proportion of each error in the whole more intuitive and consequently decreases the system error. Simulation and experimental results show that the system has a good following characteristic and is very valuable for engineering application.
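
The gyro-filtering idea can be sketched with a first-order AR error model and a scalar Kalman filter. The AR coefficient and noise variances below are illustrative, not identified from real gyro data as in the paper:

```python
# Smooth noisy gyro readings with a scalar Kalman filter whose process
# model is an AR(1) random-error process: x_k = phi * x_{k-1} + w_k,
# observed as z_k = x_k + v_k.

import random

def kalman_ar1(measurements, phi=0.95, q=0.01, r=0.5):
    """phi: AR(1) coefficient; q: process noise var; r: measurement noise var."""
    x, p = 0.0, 1.0            # state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict with the AR(1) model.
        x, p = phi * x, phi * phi * p + q
        # Update with the measurement.
        k = p / (p + r)
        x, p = x + k * (z - x), (1 - k) * p
        estimates.append(x)
    return estimates

# Demo: a gyro at rest, so the readings are pure noise the filter
# should largely suppress.
random.seed(0)
noisy = [random.gauss(0.0, 0.7) for _ in range(300)]
est = kalman_ar1(noisy)
```

In the device described above, such a filtered error estimate would be subtracted from the gyro output before it drives the servo control loop.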

  12. A novel video recommendation system based on efficient retrieval of human actions

    Science.gov (United States)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they need in an efficient way. Hence, Recommender Systems (RSs) are used to find the users' most favored items. Finding these items relies on item or user similarities, though many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) as personalized recommendation; differing views and incomplete or inaccurate tags, however, can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a video is taken from the user in order to find and recommend a list of the videos most similar to the query. Because most videos relate to humans, we present a novel low-complexity, scalable method to recommend videos based on a model of the included action. This method draws on human action retrieval approaches. For modeling human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking them. The experimental results on the HMDB, UCFYT, UCF sport and KTH datasets illustrate that, in most cases, the proposed method can reach better results than the most used methods.

  13. Phase tracking system for ultra narrow bandwidth applications

    NARCIS (Netherlands)

    Hill, M.T.; Cantoni, A.

    2002-01-01

    Recent advances make it possible to mitigate a number of drawbacks of conventional phase locked loops. These advances permit the design of phase tracking systems with much improved characteristics that are sought after in modern communication system applications. A new phase tracking system is

  14. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
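
The uncertainty-weighted fusion step can be illustrated with a minimal sketch in which each saliency map contributes in inverse proportion to its uncertainty. The maps and uncertainty values below are toy stand-ins; the paper derives its weights from the Gestalt laws of proximity, continuity, and common fate:

```python
# Fuse spatial and temporal saliency maps with inverse-uncertainty weights:
# the more certain map dominates the fused result.

def fuse_saliency(spatial, temporal, u_spatial, u_temporal):
    """Per-pixel weighted average; lower uncertainty -> higher weight."""
    w_s, w_t = 1.0 / u_spatial, 1.0 / u_temporal
    total = w_s + w_t
    return [[(w_s * s + w_t * t) / total for s, t in zip(srow, trow)]
            for srow, trow in zip(spatial, temporal)]

spatial  = [[0.8, 0.2], [0.4, 0.6]]
temporal = [[0.2, 0.8], [0.6, 0.4]]
# Assume the temporal map is twice as certain as the spatial one.
fused = fuse_saliency(spatial, temporal, u_spatial=2.0, u_temporal=1.0)
```

With these weights each fused pixel sits two-thirds of the way toward the temporal value, illustrating how the more reliable cue dominates the final saliency map.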

  15. Simulation of an advanced small aperture track system

    Science.gov (United States)

    Williams, Tommy J.; Crockett, Gregg A.; Brunson, Richard L.; Beatty, Brad; Zahirniak, Daniel R.; Deuto, Bernard G.

    2001-08-01

    Simulation development for EO systems has progressed to new levels with the advent of COTS software tools such as Matlab/Simulink. These tools allow rapid reuse of simulation library routines. We have applied these tools to newly emerging Acquisition, Tracking and Pointing (ATP) systems using many routines developed through a legacy of High Energy Laser programs such as the Airborne Laser, Space Based Laser, Tactical High Energy Laser, and the Air Force Research Laboratory projects associated with the Starfire Optical Range. The simulation architecture makes it easy to test various track algorithms under simulated scenes, with the ability to rapidly vary system hardware parameters such as the track sensor and track loop control systems. The atmospheric turbulence environment and associated optical distortion are simulated to high fidelity through an atmospheric phase screen model that produces scintillation of the laser illuminator uplink. The particular ATP system simulated is a small transportable system for tracking satellites in a daytime environment; it projects a low-power laser and receives laser returns from retro-reflector-equipped satellites. The primary application of the ATP system (and therefore the simulation) is the determination of the illuminator beam profile, jitter, and scintillation of the low-power laser at the satellite. The ATP system will serve as a test bed for satellite tracking against a high background during daytime. Of particular interest in this simulation is the ability to emulate the hardware mode logic within the simulation to test and refine system states and mode-change decisions. Additionally, the simulation allows data from hardware system tests to be imported into Matlab to drive the simulation or to be easily compared with simulation results.

  16. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine

    2012-11-01

    The present study developed maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize pumping power consumption at optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step-response test and used in the design of the tracking feedback control system. A PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min, with average pumping power between 77 and 140 W, greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W, which is based on the flow rate of 0.02 kg/s·m² defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW, depending on weather conditions. The MPPT control of the solar heating system has been verified to minimize pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
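The abstract does not detail the tracking law beyond the PI design; a common way to seek the maximum of a cost function such as Q_net = Q_s − W_p/η_e is perturb-and-observe hill climbing on the pump flow rate. The plant model below is entirely hypothetical (not the paper's model) and is chosen only so that Q_net peaks near 20 kg/min, in the range the abstract reports:

```python
def q_net(flow):
    """Hypothetical plant: heat collection saturates with flow (kg/min)
    while pumping power grows cubically, so Q_net (kW) has a maximum."""
    q_s = 12.0 * flow / (flow + 6.0)       # saturating solar heat gain
    pump = 9e-5 * flow ** 3                # pumping power over efficiency
    return q_s - pump

def mppt_track(flow=31.0, step=1.0, iters=200):
    """Perturb-and-observe: keep stepping the flow in the direction that
    increases Q_net, reversing direction whenever Q_net drops."""
    direction = -1.0
    q_prev = q_net(flow)
    for _ in range(iters):
        flow = max(1.0, flow + direction * step)
        q = q_net(flow)
        if q < q_prev:
            direction = -direction
        q_prev = q
    return flow

optimal_flow = mppt_track()
```

Starting from the standard 31 kg/min, the controller walks down to the maximum of Q_net and then oscillates within one step of it, mirroring how the paper's feedback loop reduces flow below the ASHRAE standard value.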

  17. Remote gaze tracking system for 3D environments.

    Science.gov (United States)

    Congcong Liu; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.

  18. The tracking of high level waste shipments-TRANSCOM system

    International Nuclear Information System (INIS)

    Johnson, P.E.; Joy, D.S.; Pope, R.B.

    1995-01-01

    The TRANSCOM (transportation tracking and communication) system is the U.S. Department of Energy's (DOE's) real-time system for tracking shipments of spent fuel, high-level wastes, and other high-visibility shipments of radioactive material. The TRANSCOM system has been operational since 1988. The system was used during FY1993 to track almost 100 shipments within the US DOE complex, and it is accessed weekly by 10 to 20 users

  19. The tracking of high level waste shipments - TRANSCOM system

    International Nuclear Information System (INIS)

    Johnson, P.E.; Joy, D.S.; Pope, R.B.; Thomas, T.M.; Lester, P.B.

    1994-01-01

    The TRANSCOM (transportation tracking and communication) system is the US Department of Energy's (DOE's) real-time system for tracking shipments of spent fuel, high-level wastes, and other high-visibility shipments of radioactive material. The TRANSCOM system has been operational since 1988. The system was used during FY 1993 to track almost 100 shipments within the US DOE complex, and it is accessed weekly by 10 to 20 users

  20. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Lacuesta, V; The ATLAS collaboration

    2010-01-01

    ATLAS is a multipurpose experiment that records the LHC collisions. To reconstruct the trajectories of charged particles produced in these collisions, the ATLAS tracking system is equipped with silicon planar sensors and drift-tube based detectors. They constitute the ATLAS Inner Detector. To achieve its scientific goals, the alignment of the ATLAS tracking system must accurately determine its almost 36000 degrees of freedom; the demanded precision for the alignment of the silicon sensors is below 10 micrometers. This requires a large sample of high-momentum, isolated charged-particle tracks. The high-level trigger selects those tracks online, and the raw data with the hit information of the triggered tracks are stored in a calibration stream. Tracks from the cosmic trigger during empty LHC bunches are also used as input for the alignment. The implementation of track-based alignment within the ATLAS software framework unifies different alignment approaches and allows the alignment of ...

  1. Multitarget tracking in cluttered environment for a multistatic passive radar system under the DAB/DVB network

    Science.gov (United States)

    Shi, Yi Fang; Park, Seung Hyo; Song, Taek Lyul

    2017-12-01

    Target tracking with a multistatic passive radar in a digital audio/video broadcast (DAB/DVB) network of illuminators of opportunity faces two main challenges. The first is that, in addition to the conventional association ambiguity between measurements and targets, the measurement-to-illuminator association ambiguity must be resolved, which introduces a significantly complex three-dimensional (3-D) data association problem among targets, measurements, and illuminators; this arises because all the illuminators transmit on the same carrier frequency, so signals transmitted by different illuminators but reflected by the same target are indistinguishable. The second is that only bistatic range and range-rate measurements are available, while angle information is unavailable or of very poor quality. In this paper, the authors propose a new target tracking algorithm that works directly in 3-D Cartesian coordinates, with track management based on the probability of target existence as a track quality measure. The proposed algorithm, termed sequential processing-joint integrated probabilistic data association (SP-JIPDA), applies a modified sequential processing technique to resolve the additional association ambiguity between measurements and illuminators. At each time step, the SP-JIPDA algorithm sequentially runs the JIPDA tracker to update each track for each illuminator with all the measurements in the common measurement set. For a fair comparison, the existing modified joint probabilistic data association (MJPDA) algorithm, which addresses the 3-D data association problem via "supertargets" using gate grouping and also provides tracks directly in 3-D Cartesian coordinates, is enhanced by incorporating the probability of target existence as an effective track quality measure for track management. Both algorithms handle nonlinear observations using extended Kalman filtering. A simulation study is
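The bistatic measurement model underlying such trackers can be sketched as follows; the geometry (illuminator and receiver positions) is illustrative, and the range-rate is approximated by a finite difference rather than the analytical derivative:

```python
import math

def bistatic_range(target, tx, rx):
    """Bistatic range: illuminator-to-target plus target-to-receiver distance."""
    return math.dist(tx, target) + math.dist(target, rx)

def bistatic_range_rate(target, velocity, tx, rx, dt=1e-3):
    """Finite-difference approximation of the bistatic range's time
    derivative for a target moving with constant velocity."""
    moved = [p + v * dt for p, v in zip(target, velocity)]
    return (bistatic_range(moved, tx, rx) - bistatic_range(target, tx, rx)) / dt

tx, rx = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)   # illustrative illuminator/receiver sites
r = bistatic_range((5.0, 3.0, 0.0), tx, rx)
```

Because each illuminator yields such a range sum rather than a direct target position, different illuminator hypotheses produce different implied target states, which is exactly the extra association ambiguity the SP-JIPDA algorithm resolves.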

  2. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    Science.gov (United States)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

    Although various visual tracking algorithms have been proposed in the last two to three decades, effective tracking under fast motion, deformation, occlusion, etc. remains a challenging problem. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When combined feature vectors are fed into the visual models, redundancy may cause low efficiency and ambiguity may cause poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Each feature vector is treated as a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" is avoided while the accuracy of the visual model is improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked against a number of state-of-the-art approaches. Comprehensive experiments have demonstrated the efficacy of the proposed methodology.
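The evolutionary operators used in the paper are not specified in the abstract; a toy genetic algorithm over feature-subset bitmasks (truncation selection, uniform crossover, and bit-flip mutation, all illustrative choices, with a toy fitness function) conveys the idea of evolving a decontaminated feature subset:

```python
import random

def evolve_feature_subset(fitness, n_features, pop_size=20, generations=40,
                          mutation_rate=0.1, seed=0):
    """Tiny genetic algorithm over feature-subset bitmasks (illustrative,
    not the paper's exact operators)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]  # uniform crossover
            child = [1 - g if rng.random() < mutation_rate else g for g in child]  # mutation
            children.append(child)
        pop = parents + children               # elitism: parents survive unchanged
    return max(pop, key=fitness)

# Toy fitness: features 0 and 2 are informative, the rest add "contamination".
def fitness(mask):
    return 2 * mask[0] + 2 * mask[2] - sum(mask[1:2] + mask[3:])

best = evolve_feature_subset(fitness, n_features=6)
```

In the paper the fitness would instead reflect how discriminative the tracker's visual model is on the current sequence; the evolutionary loop itself is the same idea.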

  3. MICROCONTROLLER BASED SOLAR-TRACKING SYSTEM AND ITS IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Okan BİNGÖL

    2006-02-01

    Full Text Available In this paper, a new microcontroller-based solar-tracking system is proposed, implemented and tested. The scheme presented here can operate independently of the geographical location of the installation site. The system checks the position of the sun and controls the movement of a solar panel so that the sun's radiation arrives normal to the surface of the solar panel. The developed tracking system tracks the sun both in the azimuth and in the elevation plane. A PC-based system monitoring facility is also included in the design.
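The paper's control law is not given in the abstract; a common closed-loop scheme for such trackers compares paired light-sensor readings on each axis and steps the motor toward the brighter side. The readings, step size, and deadband below are illustrative assumptions:

```python
def track_axis(left_reading, right_reading, position, step=1, deadband=5):
    """Step one tracking axis toward the brighter sensor; a deadband
    prevents hunting when the two readings are nearly equal."""
    error = left_reading - right_reading
    if error > deadband:
        return position - step   # rotate toward the left sensor
    if error < -deadband:
        return position + step   # rotate toward the right sensor
    return position              # within the deadband: hold position

# One control tick for a two-axis (azimuth/elevation) tracker.
azimuth   = track_axis(512, 300, position=90)   # left much brighter -> step left
elevation = track_axis(400, 404, position=45)   # balanced -> hold
```

Running this comparison periodically on a microcontroller drives both axes until the panel faces the sun, regardless of the installation site.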

  4. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  7. Modeling of Target Tracking System for Homing Missiles and Air Defense Systems

    Directory of Open Access Journals (Sweden)

    Yunes Sh. ALQUDSI

    2018-06-01

    Full Text Available One reason why guidance and control systems are imperfect is the dynamics of both the tracker and the missile, which appear as an error in the alignment with the LOS and a delay in the missile's response when changing its orientation. Other reasons are bias and disturbances, as well as noise around and within the system, such as thermal noise. This paper deals with the tracking system used in homing guidance and air defense systems. A realistic model of the tracking system is developed, including the receiver servo dynamics and the possible disturbances and noise that may affect the accuracy of the tracking signals measured by the seeker sensor. Modeling of parameter variability and uncertainty is also examined to determine the robustness margin of the tracking system.

  8. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
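A minimal sketch of the comparison logic described above, assuming a shared key from which both the camera and the recorder derive the same pseudorandom sample points (the key, tolerance, and failure limit are illustrative values, not those of the fielded system):

```python
import random

def sample_points(width, height, n, key):
    """Pseudorandomly choose sample points; camera and recorder derive
    the same points from a shared key exchanged over the data link."""
    rng = random.Random(key)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(n)]

def authenticate(reference, received, points, tolerance=8, max_failures=2):
    """Compare gray-scale values at the sample points; authenticate only
    if few points disagree beyond the tolerance."""
    failures = sum(1 for x, y in points
                   if abs(reference[y][x] - received[y][x]) > tolerance)
    return failures <= max_failures

# Synthetic 64x48 gray-scale frame and an inverted (substituted) copy.
frame = [[(x * 7 + y * 13) % 256 for x in range(64)] for y in range(48)]
points = sample_points(64, 48, n=16, key=1234)
forged = [[255 - v for v in row] for row in frame]
```

The tolerance absorbs transmission noise at genuine points, while a substituted image fails the comparison at many sample points at once, just as the paper describes.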

  9. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area that provides means for extracting, analyzing and understanding the behavior of a single target and of multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms that analyze the context of a video automatically. In general, research in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  10. Automated tracking for advanced satellite laser ranging systems

    Science.gov (United States)

    McGarry, Jan F.; Degnan, John J.; Titterton, Paul J., Sr.; Sweeney, Harold E.; Conklin, Brion P.; Dunn, Peter J.

    1996-06-01

    NASA's Satellite Laser Ranging Network was originally developed during the 1970s to track satellites carrying corner-cube reflectors. Today eight NASA systems, achieving millimeter ranging precision, are part of a global network of more than 40 stations that track 17 international satellites. To meet the tracking demands of a steadily growing satellite constellation within existing resources, NASA is embarking on a major automation program. While manpower on the current systems will be reduced to a single operator, the fully automated SLR2000 system is being designed to operate for months without human intervention. Because SLR2000 must be eyesafe and operate in daylight, tracking is often performed in a low-probability-of-detection, high-noise environment. The goal is to automatically select the satellite, set up the tracking and ranging hardware, verify acquisition, and close the tracking loop to optimize data yield. To accomplish the autotracking tasks, we are investigating (1) improved satellite force models, (2) more frequent updates of orbital ephemerides, (3) lunar laser ranging data processing techniques to distinguish satellite returns from noise, and (4) angular detection and search techniques to acquire the satellite. A Monte Carlo simulator has been developed to allow optimization of the autotracking algorithms by modeling the relevant system errors and then checking performance against system truth. A combination of simulator and preliminary field results will be presented.

  11. Evaluation of video detection systems, volume 1 : effects of configuration changes in the performance of video detection systems.

    Science.gov (United States)

    2009-10-01

    The effects of modifying the configuration of three video detection (VD) systems (Iteris, Autoscope, and Peek) : are evaluated in daytime and nighttime conditions. Four types of errors were used: false, missed, stuck-on, and : dropped calls. The thre...

  12. Intelligent Materials Tracking System for Construction Projects Management

    Directory of Open Access Journals (Sweden)

    Narimah Kasim

    2015-05-01

    Full Text Available An essential factor adversely affecting the performance of construction projects is the improper handling of materials during site activities. In addition, paper-based reports are mostly used to record and exchange information related to material components within the supply chain, which is problematic and inefficient. Generally, technologies (such as wireless systems and RFID) are not being adequately used to overcome human errors and are not well integrated with project management systems to make tracking and management of materials easier and faster. Findings from a literature review and surveys showed a lack of positive examples of such tools being used effectively. Therefore, this research focused on the development of a materials tracking system that integrates RFID-based materials management with resource modelling to improve on-site materials tracking. Rapid prototyping was used to develop the system, and testing was carried out to verify that it functions appropriately. The proposed system is intended to promote the use of RFID for automatic materials tracking, integrated with resource modelling (Microsoft Office Project) in the project management system, in order to establish which of the tagged components are required resources for certain project tasks. In conclusion, the system provides an automatic and easy tracking method for managing materials during materials delivery and inventory management processes in construction projects.

  13. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files store moving pictures and sound, as in real life. In today's world, the need for automated processing of the information in video files is increasing. Automated processing has a wide range of applications, including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, detection and tracking of object movement in video files plays an important role. This article describes methods of detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.
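One of the simplest of these detection methods is frame differencing against a previous frame; a minimal sketch on gray-scale frames represented as nested lists (an illustrative, not production, representation):

```python
def detect_motion(prev_frame, curr_frame, threshold=25):
    """Frame differencing: mark pixels whose gray value changed by more
    than the threshold, and return the bounding box of changed pixels."""
    box = None
    for y, (prow, crow) in enumerate(zip(prev_frame, curr_frame)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if abs(c - p) > threshold:
                if box is None:
                    box = [x, y, x, y]
                else:
                    box[0] = min(box[0], x); box[1] = min(box[1], y)
                    box[2] = max(box[2], x); box[3] = max(box[3], y)
    return box  # [x_min, y_min, x_max, y_max] or None if nothing moved

background = [[10] * 8 for _ in range(6)]      # static 8x6 scene
frame = [row[:] for row in background]
for y in range(2, 5):                          # a bright 3x3 "object" enters
    for x in range(3, 6):
        frame[y][x] = 200
```

More robust systems, such as the adaptive background subtraction described elsewhere in these records, update the background model over time instead of differencing against a single fixed frame.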

  14. Study on the Detection of Moving Target in the Mining Method Based on Hybrid Algorithm for Sports Video Analysis

    Directory of Open Access Journals (Sweden)

    Huang Tian

    2014-10-01

    Full Text Available Moving object detection and tracking is a hot research direction in computer vision and image processing. Based on an analysis of the moving target detection and tracking algorithms in common use, this paper focuses on tracking non-rigid targets in sports video. In sports video, non-rigid athletes often deform physically during movement and may become occluded. Surging volumes of media data make fast search and query increasingly difficult. Most users want to quickly extract the content and implicit knowledge of interest (concepts, rules, patterns, models and correlations) from the multimedia data, retrieve and query it quickly, and obtain hierarchical decision support for problem solving. Taking the moving objects in sports video as the object of study, this paper conducts systematic research at the theoretical level and in the technical framework, mining layer by layer from low-level motion features to high-level video semantics. This not only helps users find information quickly, but also provides decision support for solving their problems.

  15. Video game use and cognitive performance: does it vary with the presence of problematic video game use?

    Science.gov (United States)

    Collins, Emily; Freeman, Jonathan

    2014-03-01

    Action video game players have been found to outperform nonplayers on a variety of cognitive tasks. However, several failures to replicate these video game player advantages have indicated that this relationship may not be straightforward. Moreover, despite the discovery that problematic video game players do not appear to demonstrate the same superior performance as nonproblematic video game players in relation to multiple object tracking paradigms, this has not been investigated for other tasks. Consequently, this study compared gamers and nongamers in task switching ability, visual short-term memory, mental rotation, enumeration, and flanker interference, as well as investigated the influence of self-reported problematic video game use. A total of 66 participants completed the experiment, 26 of whom played action video games, including 20 problematic players. The results revealed no significant effect of playing action video games, nor any influence of problematic video game play. This indicates that the previously reported cognitive advantages in video game players may be restricted to specific task features or samples. Furthermore, problematic video game play may not have a detrimental effect on cognitive performance, although this is difficult to ascertain considering the lack of video game player advantage. More research is therefore sorely needed.

  16. A practical implementation of free viewpoint video system for soccer games

    Science.gov (United States)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand; however, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D, even in the evening of the same day. Our work is still ongoing; however, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras should be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercialized TV sets but also for devices such as smartphones. However, the practical system has not yet been completed and our study is still ongoing.
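The temporal background estimation mentioned above can be sketched with a per-pixel temporal median, a standard choice when the background dominates each pixel's history (the luminance-only frames here are an illustrative simplification of the chrominance-based method described):

```python
from statistics import median

def estimate_background(frames):
    """Estimate the static background as the per-pixel temporal median
    over a sequence of frames: players pass by, the field stays."""
    height, width = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(width)]
            for y in range(height)]

field = [[50] * 4 for _ in range(3)]   # uniform 4x3 "pitch"
frames = []
for t in range(5):
    f = [row[:] for row in field]
    f[1][t % 4] = 220                  # a bright "player" moving across
    frames.append(f)
```

Because the player occupies any given pixel in only a minority of frames, the median recovers the clean field at every pixel.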

  17. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used automatically to archive the surveillance pictures. The design of the surveillance system is described with examples of its operation

  18. OLIVE: Speech-Based Video Retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, Jean-Luc; den Hartog, Jurgen; den Hartog, Jeremy; Netter, Klaus

    1999-01-01

    This paper describes the Olive project which aims to support automated indexing of video material by use of human language technologies. Olive is making use of speech recognition to automatically derive transcriptions of the sound tracks, generating time-coded linguistic elements which serve as the

  19. Realization on the interactive remote video conference system based on multi-Agent

    Directory of Open Access Journals (Sweden)

    Zheng Yan

    2016-01-01

    Full Text Available To let people in different places participate in the same conference and speak and discuss freely, an interactive remote video conferencing system is designed and realized based on multi-Agent collaboration. FEC (forward error correction) and tree P2P technology are first used to build a live conference structure to transfer audio and video data; a branch conference port can then apply to become an interactive focus in order to speak and discuss; and the introduction of multi-Agent collaboration technology improves the system's robustness. Experiments showed that, under normal network conditions, the system can support 350 branch conference nodes broadcasting simultaneously, with smooth audio and video quality. It can support large-scale remote video conferences.
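The FEC scheme is not specified beyond the acronym; a minimal sketch of one common approach, an XOR parity packet that can recover a single lost packet per group (packet contents are illustrative):

```python
def xor_parity(packets):
    """FEC parity packet: byte-wise XOR of a group of equal-length packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Recover a single lost packet (None) by XORing the parity with
    every packet in the group that did arrive."""
    missing = xor_parity([p for p in received if p is not None] + [parity])
    return [p if p is not None else missing for p in received]

group = [b"audio-1 ", b"audio-2 ", b"video-1 "]
parity = xor_parity(group)
arrived = [group[0], None, group[2]]   # one packet lost in transit
```

This lets receivers repair occasional loss locally instead of requesting retransmission, which matters for keeping live audio and video smooth over a tree P2P overlay.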

  20. Official Union Time Tracking System

    Data.gov (United States)

    Social Security Administration — Official Union Time Tracking System captures the reporting and accounting of the representational activity for all American Federation of Government Employees (AFGE)...

  1. Theory and practice of perceptual video processing in broadcast encoders for cable, IPTV, satellite, and internet distribution

    Science.gov (United States)

    McCarthy, S.

    2014-02-01

    This paper describes the theory and application of a perceptually-inspired video processing technology that was recently incorporated into professional video encoders now being used by major cable, IPTV, satellite, and internet video service providers. We will present data that show that this perceptual video processing (PVP) technology can improve video compression efficiency by up to 50% for MPEG-2, H.264, and High Efficiency Video Coding (HEVC). The PVP technology described in this paper works by forming predicted eye-tracking attractor maps that indicate how likely it might be that a free-viewing person would look at a particular area of an image or video. We will introduce in this paper the novel model and supporting theory used to calculate the eye-tracking attractor maps. We will show how the underlying perceptual model was inspired by electrophysiological studies of the vertebrate retina, and will explain how the model incorporates statistical expectations about natural scenes as well as a novel method for predicting error in signal estimation tasks. Finally, we will describe how the eye-tracking attractor maps are created in real time and used to modify video prior to encoding so that it is more compressible but not noticeably different from the original unmodified video.
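As a rough illustration of making video "more compressible but not noticeably different", the sketch below blends low-attention pixels toward a local mean, removing high-frequency detail where viewers are unlikely to fixate. It is a hypothetical simplification using an assumed attention map in [0, 1], not the actual PVP algorithm:

```python
import numpy as np

def precondition_frame(frame, attention):
    """Smooth regions viewers are unlikely to fixate before encoding.

    frame:     2-D luminance array.
    attention: same-shape array in [0, 1]; 1 = likely fixation point.
    Low-attention pixels are blended toward a 5-point local mean, so
    detail is preserved only where the attractor map predicts gaze.
    """
    f = frame.astype(float)
    local_mean = (f
                  + np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
                  + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1)) / 5.0
    return attention * f + (1.0 - attention) * local_mean
```

Because the blend is driven by the attention map, fully attended regions pass through bit-exact while unattended texture is softened, which lowers the entropy the encoder must spend there.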

  2. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
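The first algorithm's idea of detecting camera motion from the MPEG-2 motion-vector field can be sketched as taking a robust statistic over the macroblock vectors: the median rejects independently moving foreground objects and leaves the dominant pan. This is an assumed simplification for illustration; the paper's method is more elaborate:

```python
import numpy as np

def estimate_global_motion(mv_field):
    """Estimate dominant camera pan from a macroblock motion-vector field.

    mv_field: iterable of (dx, dy) motion vectors, one per macroblock.
    The per-component median is robust to the minority of vectors that
    belong to independently moving objects.
    """
    mv = np.asarray(mv_field, dtype=float).reshape(-1, 2)
    return np.median(mv, axis=0)
```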

  3. Wire chamber requirements and tracking simulation studies for tracking systems at the superconducting super collider

    International Nuclear Information System (INIS)

    Hanson, G.G.; Niczyporuk, B.B.; Palounek, A.P.T.

    1989-02-01

    Limitations placed on wire chambers by radiation damage and rate requirements in the SSC environment are reviewed. Possible conceptual designs for wire chamber tracking systems which meet these requirements are discussed. Computer simulation studies of tracking in such systems are presented. Simulations of events from interesting physics at the SSC, including hits from minimum bias background events, are examined. Results of some preliminary pattern recognition studies are given. Such computer simulation studies are necessary to determine the feasibility of wire chamber tracking systems for complex events in a high-rate environment such as the SSC. 11 refs., 9 figs., 1 tab

  4. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for, e.g., surveillance applications in large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. Misses are reduced by 37%, which is a significant improvement.

  5. Control system design for UAV trajectory tracking

    Science.gov (United States)

    Wang, Haitao; Gao, Jinyuan

    2006-11-01

    In recent years, because of emerging requirements for increased autonomy, the controllers of uninhabited air vehicles must be augmented with very sophisticated autopilot designs capable of tracking complex and agile maneuvering trajectories. This paper provides a simplified control system framework to solve the UAV maneuvering trajectory tracking problem. The flight control system is divided into three subsystems: command generation, transformation and allocation. According to the kinematics equations of the aircraft, flight path angle commands can be generated from the desired 3D position produced by path planning. These commands are transformed to body angular rates through direct nonlinear mapping, which is simpler than the common multi-loop method based on the time-scale separation assumption. Then, using the weighted pseudo-inverse method, the control surface deflections are allocated to follow the body angular rates from the previous step. To improve robustness, a nonlinear disturbance observer-based approach is used to compensate for system uncertainty. A 6DOF nonlinear UAV model is controlled to demonstrate the performance of the trajectory tracking control system. Simulation results show that the control strategy is easy to realize and the tracking precision is satisfactory.
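The allocation step can be sketched with the weighted pseudo-inverse: given a control-effectiveness matrix B mapping surface deflections u to commanded body-rate accelerations m, choose the minimum-weighted-effort deflections satisfying B u = m. This is a generic textbook form with an assumed B and weight W, not the paper's exact formulation:

```python
import numpy as np

def allocate(B, W, m_des):
    """Weighted pseudo-inverse control allocation.

    Returns u minimizing u.T @ W @ u subject to B @ u = m_des:
        u = W^-1 B.T (B W^-1 B.T)^-1 m_des
    W penalizes usage of individual surfaces (e.g. rate-limited ones).
    """
    Winv = np.linalg.inv(W)
    return Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, m_des)
```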

  6. Extending Track Analysis from Animals in the Lab to Moving Objects Anywhere

    NARCIS (Netherlands)

    Dommelen, W. van; Laar, P.J.L.J. van de; Noldus, L.P.J.J.

    2013-01-01

    In this chapter we compare two application domains in which the tracking of objects and the analysis of their movements are core activities, viz. animal tracking and vessel tracking. More specifically, we investigate whether EthoVision XT, a research tool for video tracking and analysis of the

  7. Combating bad weather part I rain removal from video

    CERN Document Server

    Mukhopadhyay, Sudipta

    2015-01-01

    Current vision systems are designed to perform in normal weather condition. However, no one can escape from severe weather conditions. Bad weather reduces scene contrast and visibility, which results in degradation in the performance of various computer vision algorithms such as object tracking, segmentation and recognition. Thus, current vision systems must include some mechanisms that enable them to perform up to the mark in bad weather conditions such as rain and fog. Rain causes the spatial and temporal intensity variations in images or video frames. These intensity changes are due to the

  8. Commercial vehicle route tracking using video detection.

    Science.gov (United States)

    2010-10-31

    Interstate commercial vehicle traffic is a major factor in the life of any road surface. The ability to track : these vehicles and their routes through the state can provide valuable information to planning : activities. We propose a method using vid...

  9. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery.

    Science.gov (United States)

    Ryu, Jiwon; Choi, Jaesoon; Kim, Hee Chan

    2013-01-01

    Robot-assisted minimally invasive surgery is effective for operations in limited space. Enhancing safety based on automatic tracking of surgical instrument position to prevent inadvertent harmful events such as tissue perforation or instrument collisions could be a meaningful augmentation to current robotic surgical systems. A vision-based instrument tracking scheme as a core algorithm to implement such functions was developed in this study. An automatic tracking scheme is proposed as a chain of computer vision techniques, including classification of metallic properties using k-means clustering and instrument movement tracking using similarity measures, Euclidean distance calculations, and a Kalman filter algorithm. The implemented system showed satisfactory performance in tests using actual robot-assisted surgery videos. Trajectory comparisons of automatically detected data and ground truth data obtained by manually locating the center of mass of each instrument were used to quantitatively validate the system. Instruments and collisions could be well tracked through the proposed methods. The developed collision warning system could provide valuable information to clinicians for safer procedures. © 2012, Copyright the Authors. Artificial Organs © 2012, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
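The Kalman-filter stage of the tracking chain can be sketched as a standard constant-velocity predict/update cycle on one instrument coordinate. This is the generic textbook form with assumed noise parameters, not the authors' implementation:

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter.

    State x = [position, velocity]; z is the measured position
    (e.g. the detected instrument-tip coordinate in a frame).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Run per frame and per tracked instrument; the predicted position also bridges short detection dropouts (e.g. occlusion by another instrument).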

  10. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Butti, P; The ATLAS collaboration

    2014-01-01

    In order to reconstruct the trajectories of charged particles, the ATLAS experiment exploits a tracking system built using different technologies, planar silicon modules or microstrips (the PIX and SCT detectors) and gaseous drift tubes (the TRT), all embedded in a 2 T solenoidal magnetic field. Misalignments and deformations of the active detector elements deteriorate the track reconstruction resolution and lead to systematic biases on the measured track parameters. The alignment procedure exploits various advanced tools and techniques to determine module positions and correct for deformations. For LHC Run II, the system is being upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL).

  11. High-speed holographic correlation system for video identification on the internet

    Science.gov (United States)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material on the Internet. By combining a high-speed correlation engine and web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which is capable of easily replacing a digital authorization server in FReCs with optical correlation.

  12. Track star : Merrick's RFID system tracks oilpatch equipment

    Energy Technology Data Exchange (ETDEWEB)

    Louie, J

    2006-10-15

    Designed by Merrick Systems, the new Rig-Hand system uses radio frequency identification (RFID) in conjunction with downhole components to manage both surface and downhole equipment. The Rig-Hand system consists of RFID tags which can either be installed to new pipe or retrofitted to existing components; handheld or fixed RFID tag readers; and, software modules. The tags can be mounted to a wide range of surface and downhole components. The system can be used by rig personnel to scan tags with handheld computers, or can be used by a designated reader for greater automation and efficiency. The software modules have been designed for typical rig, pipeyard and service company users. The system provides essential drillstring information that helps to reduce pipe failures, fishing costs and downtime. Component tracking may also impact on both personnel safety and operational risks, as a recent survey has suggested that 14 per cent of catastrophic drillstring failures are due to pipe fatigue. When drilling deviated wells, operators typically have difficulty tracking which pipe may have become fatigued. It is anticipated that the system will also allow for improved logistics management, as it is currently estimated that 70 per cent of casing returned from the wellsite to a pipe yard goes to rust. The tagged components will also demand a higher resale or salvage value than untagged components with limited traceability. The system has gone through extensive testing, and to date has provided extensive savings in both onshore and offshore rigs. Commercialization is expected by the end of 2006. 1 fig.

  13. Track star : Merrick's RFID system tracks oilpatch equipment

    International Nuclear Information System (INIS)

    Louie, J.

    2006-01-01

    Designed by Merrick Systems, the new Rig-Hand system uses radio frequency identification (RFID) in conjunction with downhole components to manage both surface and downhole equipment. The Rig-Hand system consists of RFID tags which can either be installed to new pipe or retrofitted to existing components; handheld or fixed RFID tag readers; and, software modules. The tags can be mounted to a wide range of surface and downhole components. The system can be used by rig personnel to scan tags with handheld computers, or can be used by a designated reader for greater automation and efficiency. The software modules have been designed for typical rig, pipeyard and service company users. The system provides essential drillstring information that helps to reduce pipe failures, fishing costs and downtime. Component tracking may also impact on both personnel safety and operational risks, as a recent survey has suggested that 14 per cent of catastrophic drillstring failures are due to pipe fatigue. When drilling deviated wells, operators typically have difficulty tracking which pipe may have become fatigued. It is anticipated that the system will also allow for improved logistics management, as it is currently estimated that 70 per cent of casing returned from the wellsite to a pipe yard goes to rust. The tagged components will also demand a higher resale or salvage value than untagged components with limited traceability. The system has gone through extensive testing, and to date has provided extensive savings in both onshore and offshore rigs. Commercialization is expected by the end of 2006. 1 fig

  14. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
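The motion-detection and background-subtraction components the system builds on can be sketched with an exponential running-average background model, whose slow adaptation absorbs illumination changes while still flagging parked or moving vehicles. This is a minimal illustration of the adaptive-background idea, not the product pipeline:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: slow adaptation absorbs gradual
    illumination change without absorbing transient vehicles."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels deviating from the background model beyond the threshold."""
    return np.abs(frame.astype(float) - bg) > thresh
```

In practice the mask would be cleaned with morphology and fed to a vehicle detector; `alpha` trades adaptation speed against how long a stationary car keeps registering as foreground.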

  15. VME Switch for CERN's PS Analog Video System

    CERN Document Server

    Acebes, I; Heinze, W; Lewis, J; Serrano, J

    2003-01-01

    Analog video signal switching is used in CERN's Proton Synchrotron (PS) complex to route the video signals coming from Beam Diagnostics systems to the Meyrin Control Room (MCR). Traditionally, this has been done with custom electromechanical relay-based cards controlled serially via CAMAC crates. In order to improve the robustness and maintainability of the system, while keeping it analog to preserve the low latency, a VME card based on Analog Devices' AD8116 analog matrix chip has been developed. Video signals go into the front panel and exit the switch through the P2 connector of the VME backplane. The module is a 16 input, 32 output matrix. Larger matrices can be built using more modules and bussing their outputs together, thanks to the high impedance feature of the AD8116. Another VME module takes the selected signals from the P2 connector and performs automatic gain to send them at nominal output level through its front panel. This paper discusses both designs and presents experimental test results.

  16. NGSI: Function Requirements for a Cylinder Tracking System

    International Nuclear Information System (INIS)

    Branney, S.

    2012-01-01

    While nuclear suppliers currently track uranium hexafluoride (UF6) cylinders in various ways for their own purposes, industry practices vary significantly. The NNSA Office of Nonproliferation and International Security's Next Generation Safeguards Initiative (NGSI) has begun a 5-year program to investigate the concept of a global monitoring scheme that uniquely identifies and tracks UF6 cylinders. As part of this effort, NGSI's multi-laboratory team has documented the 'life of a UF6 cylinder' and reviewed IAEA practices related to UF6 cylinders. Based on this foundation, this paper examines the functional requirements of a system that would uniquely identify and track UF6 cylinders. There are many considerations for establishing a potential tracking system. These factors include the environmental conditions a cylinder may be exposed to, where cylinders may be particularly vulnerable to diversion, how such a system may be integrated into the existing flow of commerce, how proprietary data generated in the process may be protected, what a system may require in terms of the existing standard for UF6 cylinder manufacture or modifications to it, and what the limiting technology factors may be. It is desirable that a tracking system provide benefit to industry while imposing as few additional constraints as possible and still meeting IAEA safeguards objectives. This paper includes recommendations for this system and the analysis that generated them.

  17. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  18. Evolution of the 3-dimensional video system for facial motion analysis: ten years' experiences and recent developments.

    Science.gov (United States)

    Tzou, Chieh-Han John; Pona, Igor; Placheta, Eva; Hold, Alina; Michaelidou, Maria; Artner, Nicole; Kropatsch, Walter; Gerber, Hans; Frey, Manfred

    2012-08-01

    Since the implementation of the computer-aided system for assessing facial palsy in 1999 by Frey et al (Plast Reconstr Surg. 1999;104:2032-2039), no similar system that can make an objective, three-dimensional, quantitative analysis of facial movements has been marketed. This system has been in routine use since its launch, and it has proven to be reliable, clinically applicable, and therapeutically accurate. With the cooperation of international partners, more than 200 patients were analyzed. Recent developments in computer vision, mostly in the areas of generative face models, active appearance models (and extensions), optical flow, and video tracking, have been successfully incorporated to automate the prototype system. Further market-ready development and a business partner will be needed to enable production of this system to enhance clinical methodology in diagnostic and prognostic accuracy as a personalized therapy concept, leading to better results and higher quality of life for patients with impaired facial function.

  19. Detection Thresholds for Rotation and Translation Gains in 360° Video-Based Telepresence Systems.

    Science.gov (United States)

    Zhang, Jingxin; Langbehn, Eike; Krupke, Dennis; Katzakis, Nicholas; Steinicke, Frank

    2018-04-01

    Telepresence systems have the potential to overcome limits and distance constraints of the real world by enabling people to remotely visit and interact with each other. However, current telepresence systems usually lack natural ways of supporting interaction and exploration of remote environments (REs). In particular, single webcams for capturing the RE provide only a limited illusion of spatial presence, and movement control of mobile platforms in today's telepresence systems is often restricted to simple interaction devices. One of the main challenges of telepresence systems is to allow users to explore an RE in an immersive, intuitive and natural way, e.g., by real walking in the user's local environment (LE), and thus controlling motions of the robot platform in the RE. However, the LE in which the user's motions are tracked usually provides a much smaller interaction space than the RE. In this context, redirected walking (RDW) is a very suitable approach to solve this problem. However, so far there is no previous work which explored if and how RDW can be used in video-based 360° telepresence systems. In this article, we conducted two psychophysical experiments in which we quantified how much humans can be unknowingly redirected on virtual paths in the RE which differ from the physical paths that they actually walk in the LE. Experiment 1 introduces a discrimination task between local and remote translations, and in Experiment 2 we analyzed the discrimination between local and remote rotations. In Experiment 1 participants performed straightforward translations in the LE that were mapped to straightforward translations in the RE shown as 360° videos, which were manipulated by different gains. Then, participants had to estimate whether the remotely perceived translation was faster or slower than the actual physically performed translation. Similarly, in Experiment 2 participants performed rotations in the LE that were mapped to the virtual rotations
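The gain manipulation in both experiments amounts to scaling locally performed motion before it drives the remote platform. A minimal sketch (hypothetical function names, uniform gains) of integrating gained steps into the remotely shown pose:

```python
import math

def remote_pose(steps, trans_gain, rot_gain):
    """Integrate (distance, turn_deg) steps performed in the local
    environment into the pose shown in the remote environment, with
    translation and rotation gains applied (gain = 1.0 means faithful
    mapping; gains inside the detection thresholds go unnoticed)."""
    x = y = 0.0
    heading = 0.0
    for dist, turn in steps:
        heading += rot_gain * math.radians(turn)
        d = trans_gain * dist
        x += d * math.cos(heading)
        y += d * math.sin(heading)
    return x, y, math.degrees(heading)
```

The psychophysical thresholds the article estimates bound how far `trans_gain` and `rot_gain` may deviate from 1.0 before users notice the mismatch.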

  20. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    Science.gov (United States)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROV), Autonomous Underwater Vehicles (AUV), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed world-wide scientific community to collaboratively annotate videos anywhere at any time. Among its fully implemented features are: • A user login system for fine-grained permission and access control • Video watching • Video search using keywords, geographic position, depth and time range, and any combination thereof • Video annotation organised in themes (tracks) such as biology and geology, in standard or full-screen mode • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets • Download of products for scientific use This unique web application helps make costly ROV video material available online (estimated costs range between 5,000 and 10,000 euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to otherwise uncharted

  1. Maximum-power-point tracking control of solar heating system

    KAUST Repository

    Huang, Bin-Juine; Ton, Wei-Zhe; Wu, Chen-Chun; Ko, Hua-Wei; Chang, Hsien-Shun; Yen, Rue-Her; Wang, Jiunn-Cherng

    2012-01-01

    was used to determine the instantaneous tracking target Qmax(t). The system transfer-function model of the solar heating system was also derived experimentally using a step response test and used in the design of the tracking feedback control system. The PI
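The PI tracking loop the abstract refers to can be sketched in discrete form; the gains and the first-order plant below are generic illustrative assumptions, whereas the paper derives its controller from the experimentally identified transfer function:

```python
def make_pi_controller(kp, ki, dt):
    """Discrete PI controller as a closure; drives a measurement toward
    a time-varying setpoint such as the tracking target Qmax(t)."""
    state = {"integral": 0.0}
    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt        # accumulated error term
        return kp * error + ki * state["integral"]
    return step
```

The integral term removes steady-state error, which is what lets the loop hold the heat-collection rate at the instantaneous maximum-power target.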

  2. Exterior field evaluation of new generation video motion detection systems

    International Nuclear Information System (INIS)

    Malone, T.P.

    1988-01-01

    Recent advancements in video motion detection (VMD) system design and technology have resulted in several new commercial VMD systems. Considerable interest in the new VMD systems has been generated because the systems are advertised to work effectively in exterior applications. Previous VMD systems, when used in an exterior environment, tended to have very high nuisance alarm rates due to weather conditions, wildlife activity and lighting variations. The new VMD systems advertise more advanced processing of the incoming video signal which is aimed at rejecting exterior environmental nuisance alarm sources while maintaining a high detection capability. This paper discusses the results of field testing, in an exterior environment, of two new VMD systems
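A baseline frame-differencing detector shows why exterior scenes drive nuisance alarms: any global change from lighting, weather or wildlife can exceed naive thresholds, which is exactly what the newer systems' more advanced signal processing is meant to reject. A minimal sketch with assumed threshold values:

```python
import numpy as np

def motion_alarm(prev, curr, pixel_thresh=30, area_thresh=50):
    """Naive video motion detection: alarm when enough pixels change.

    pixel_thresh rejects sensor noise per pixel; area_thresh rejects
    small disturbances. Neither defeats global lighting shifts, which
    is the weakness exterior-grade VMD processing addresses.
    """
    diff = np.abs(curr.astype(int) - prev.astype(int))
    changed = diff > pixel_thresh
    return bool(changed.sum() >= area_thresh), changed
```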

  3. Assignment and Correspondence Tracking System - Tactical / Operational Reporting

    Data.gov (United States)

    Social Security Administration — Reporting data store for the Assignment and Correspondence Tracking System (ACT). ACT automates the assignment and tracking of correspondence processing within the...

  4. Spatio-temporal features for tracking and quadruped/biped discrimination

    Science.gov (United States)

    Rickman, Rick; Copsey, Keith; Bamber, David C.; Page, Scott F.

    2012-05-01

    Techniques such as SIFT and SURF facilitate efficient and robust image processing operations through the use of sparse and compact spatial feature descriptors and show much potential for defence and security applications. This paper considers the extension of such techniques to include information from the temporal domain, to improve utility in applications involving moving imagery within video data. In particular, the paper demonstrates how spatio-temporal descriptors can be used very effectively as the basis of a target tracking system and as target discriminators which can distinguish between bipeds and quadrupeds. Results using sequences of video imagery of walking humans and dogs are presented, and the relative merits of the approach are discussed.

  5. Tracking Behavioral Progress within a Children's Mental Health System: The Vermont Community Adjustment Tracking System.

    Science.gov (United States)

    Bruns, Eric J.; Burchard, John D.; Froelich, Peter; Yoe, James T.; Tighe, Theodore

    1998-01-01

    Describes the Vermont Community Adjustment Tracking System (VT-CATS), which utilizes four behavioral instruments to allow intensive, ongoing, and interpretable behavioral assessment of a service system's most challenging children and adolescents. Also explains the adjustment indicator checklists and the ability of VT-CATS to address agencies'…

  6. Alignment of the ALICE Inner Tracking System with cosmic-ray tracks

    CERN Document Server

    Aamodt, K; Abeysekara, U; Abrahantes Quintana, A; Adamová, D; Aggarwal, M M; Aglieri Rinella, G; Agocs, A G; Aguilar Salazar, S; Ahammed, Z; Ahmad, A; Ahmad, N; Ahn, S U; Akimoto, R; Akindinov, A; Aleksandrov, D; Alessandro, B; Alfaro Molina, R; Alici, A; Almaráz Aviña, E; Alme, J; Altini, V; Altinpinar, S; Alt, T; Andrei, C; Andronic, A; Anelli, G; Angelov, V; Anson, C; Anticic, T; Antinori, F; Antinori, S; Antipin, K; Antonczyk, D; Antonioli, P; Anzo, A; Aphecetche, L; Appelshäuser, H; Arcelli, S; Arceo, R; Arend, A; Armesto, N; Arnaldi, R; Aronsson, T; Arsene, I C; Asryan, A; Augustinus, A; Averbeck, R; Awes, T C; Äystö, J; Azmi, M D; Bablok, S; Bach, M; Badalà, A; Baek, Y W; Bagnasco, S; Bailhache, R; Bala, R; Baldisseri, A; Baldit, A; Bán, J; Barbera, R; Barile, F; Barnaföldi, G G; Barnby, L; Barret, V; Bartke, J; Basile, M; Basmanov, V; Bastid, N; Bathen, B; Batigne, G; Batyunya, B; Baumann, C; Bearden, I G; Becker, B; Belikov, I; Bellwied, R; Belmont-Moreno, E; Belogianni, A; Benhabib, L; Beolé, S; Berceanu, I; Bercuci, A; Berdermann, E; Berdnikov, Y; Betev, L; Bhasin, A; Bhati, A K; Bianchi, L; Bianchin, C; Bianchi, N; Bielcík, J; Bielcíková, J; Bilandzic, A; Bimbot, L; Biolcati, E; Blanc, A; Blanco, F; Blanco, F; Blau, D; Blume, C; Boccioli, M; Bock, N; Bogdanov, A; Bøggild, H; Bogolyubsky, M; Bohm, J; Boldizsár, L; Bombara, M; Bombonati, C; Bondila, M; Borel, H; Borshchov, V; Bortolin, C; Bose, S; Bosisio, L; Bossú, F; Botje, M; Böttger, S; Bourdaud, G; Boyer, B; Braun, M; Braun-Munzinger, P; Bravina, L; Bregant, M; Breitner, T; Bruckner, G; Bruna, E; Bruno, G E; Brun, R; Budnikov, D; Buesching, H; Bugaev, K; Buncic, P; Busch, O; Buthelezi, Z; Caffarri, D; Caines, H; Cai, X; Camacho, E; Camerini, P; Campbell, M; Canoa Roman, V; Capitani, G P; Cara Romeo, G; Carena, F; Carena, W; Carminati, F; Casanova Díaz, A; Caselle, M; Castillo Castellanos, J; Castillo Hernandez, J F; Catanescu, V; Cattaruzza, E; Cavicchioli, C; Cerello, P; Chambert, V; Chang, B; 
Chapeland, S; Charpy, A; Charvet, J L; Chattopadhyay, S; Chattopadhyay, S; Cherney, M; Cheshkov, C; Cheynis, B; Chiavassa, E; Chibante Barroso, V; Chinellato, D D; Chochula, P; Choi, K; Chojnacki, M; Christakoglou, P; Christensen, C H; Christiansen, P; Chujo, T; Chuman, F; Cicalo, C; Cifarelli, L; Cindolo, F; Cleymans, J; Cobanoglu, O; Coffin, J P; Coli, S; Colla, A; Conesa Balbastre, G; Conesa del Valle, Z; Conner, E S; Constantin, P; Contin, G; Contreras, J G; Cormier, T M; Corrales Morales, Y; Cortese, P; Cortés Maldonado, I; Cosentino, M R; Costa, F; Cotallo, M E; Crescio, E; Crochet, P; Cuautle, E; Cunqueiro, L; Cussonneau, J; Dainese, A; Dalsgaard, H H; Danu, A; Dash, A; Dash, S; Das, I; Das, S; de Barros, G O V; De Caro, A; de Cataldo, G; de Cuveland, J; De Falco, A; De Gaspari, M; de Groot, J; De Gruttola, D; de Haas, A P; De Marco, N; De Pasquale, S; De Remigis, R; de Rooij, R; de Vaux, G; Delagrange, H; Dellacasa, G; Deloff, A; Demanov, V; Dénes, E; Deppman, A; D'Erasmo, G; Derkach, D; Devaux, A; Di Bari, D; Di Giglio, C; Di Liberto, S; Di Mauro, A; Di Nezza, P; Dialinas, M; Díaz, L; Díaz, R; Dietel, T; Ding, H; Divià, R; Djuvsland, Ø; do Amaral Valdiviesso, G; Dobretsov, V; Dobrin, A; Dobrowolski, T; Dönigus, B; Domínguez, I; Dordic, O; Dubey, A K; Dubuisson, J; Ducroux, L; Dupieux, P; Dutta Majumdar, A K; Dutta Majumdar, M R; Elia, D; Emschermann, D; Enokizono, A; Espagnon, B; Estienne, M; Evans, D; Evrard, S; Eyyubova, G; Fabjan, C W; Fabris, D; Faivre, J; Falchieri, D; Fantoni, A; Fasel, M; Fearick, R; Fedunov, A; Fehlker, D; Fekete, V; Felea, D; Fenton-Olsen, B; Feofilov, G; Fernández Téllez, A; Ferreiro, E G; Ferretti, A; Ferretti, R; Figueredo, M A S; Filchagin, S; Fini, R; Fionda, F M; Fiore, E M; Floris, M; Fodor, Z; Foertsch, S; Foka, P; Fokin, S; Formenti, F; Fragiacomo, E; Fragkiadakis, M; Frankenfeld, U; Frolov, A; Fuchs, U; Furano, F; Furget, C; Fusco Girard, M; Gaardhøje, J J; Gadrat, S; Gagliardi, M; Gago, A; Gallio, M; Ganoti, P; Ganti, M 
S; Garabatos, C; García Trapaga, C; Gebelein, J; Gemme, R; Germain, M; Gheata, A; Gheata, M; Ghidini, B; Ghosh, P; Giraudo, G; Giubellino, P; Gladysz-Dziadus, E; Glasow, R; Glässel, P; Glenn, A; Gomez, R; González Santos, H; González-Trueba, L H; González-Zamora, P; Gorbunov, S; Gorbunov, Y; Gotovac, S; Gottschlag, H; Grabski, V; Grajcarek, R; Grelli, A; Grigoras, A; Grigoras, C; Grigoriev, V; Grigoryan, A; Grinyov, B; Grion, N; Gros, P; Grosse-Oetringhaus, J F; Grossiord, J Y; Grosso, R; Guarnaccia, C; Guber, F; Guernane, R; Guerzoni, B; Gulbrandsen, K; Gulkanyan, H; Gunji, T; Gupta, A; Gupta, R; Gustafsson, H A; Gutbrod, H; Haaland, Ø; Hadjidakis, C; Haiduc, M; Hamagaki, H; Hamar, G; Hamblen, J; Han, B H; Harris, J W; Hartig, M; Harutyunyan, A; Hasch, D; Hasegan, D; Hatzifotiadou, D; Hayrapetyan, A; Heide, M; Heinz, M; Helstrup, H; Herghelegiu, A; Hernández, C; Herrera Corral, G; Herrmann, N; Hetland, K F; Hicks, B; Hiei, A; Hille, P T; Hippolyte, B; Horaguchi, T; Hori, Y; Hristov, P; Hrivnácová, I; Huber, S; Humanic, T J; Hu, S; Hutter, D; Hwang, D S; Ichou, R; Ilkaev, R; Ilkiv, I; Innocenti, P G; Ippolitov, M; Irfan, M; Ivan, C; Ivanov, A; Ivanov, M; Ivanov, V; Iwasaki, T; Jachokowski, A; Jacobs, P; Jancurová, L; Jangal, S; Janik, R; Jayananda, K; Jena, C; Jena, S; Jirden, L; Jones, G T; Jones, P G; Jovanovic, P; Jung, H; Jung, W; Jusko, A; Kaidalov, A B; Kalcher, S; Kalinák, P; Kalliokoski, T; Kalweit, A; Kamal, A; Kamermans, R; Kanaki, K; Kang, E; Kang, J H; Kapitan, J; Kaplin, V; Kapusta, S; Karavicheva, T; Karpechev, E; Kazantsev, A; Kebschull, U; Keidel, R; Khan, M M; Khan, S A; Khanzadeev, A; Kharlov, Y; Kikola, D; Kileng, B; Kim, D J; Kim, D S; Kim, D W; Kim, H N; Kim, J H; Kim, J; Kim, J S; Kim, M; Kim, M; Kim, S H; Kim, S; Kim, Y; Kirsch, S; Kiselev, S; Kisel, I; Kisiel, A; Klay, J L; Klein-Bösing, C; Klein, J; Kliemant, M; Klovning, A; Kluge, A; Kniege, S; Koch, K; Kolevatov, R; Kolojvari, A; Kondratiev, V; Kondratyeva, N; Konevskih, A; Kornas, E; 
Kour, R; Kowalski, M; Kox, S; Kozlov, K; Králik, I; Kral, J; Kramer, F; Kraus, I; Kravcáková, A; Krawutschke, T; Krivda, M; Krumbhorn, D; Krus, M; Kryshen, E; Krzewicki, M; Kucheriaev, Y; Kuhn, C; Kuijer, P G; Kumar, L; Kumar, N; Kupczak, R; Kurashvili, P; Kurepin, A; Kurepin, A N; Kuryakin, A; Kushpil, S; Kushpil, V; Kutouski, M; Kvaerno, H; Kweon, M J; Kwon, Y; Lackner, F; Ladrón de Guevara, P; Lafage, V; Lal, C; Lara, C; La Rocca, P; Larsen, D T; Laurenti, G; Lazzeroni, C; Le Bornec, Y; Le Bris, N; Lee, H; Lee, K S; Lee, S C; Lefèvre, F; Lehnert, J; Leistam, L; Lenhardt, M; Lenti, V; León, H; León Monzón, I; León Vargas, H; Lévai, P; Lietava, R; Lindal, S; Lindenstruth, V; Lippmann, C; Lisa, M A; Listratenko, O; Liu, L; Li, Y; Loginov, V; Lohn, S; López Noriega, M; López-Ramírez, R; López Torres, E; Lopez, X; Løvhøiden, G; Lozea Feijo Soares, A; Lunardon, M; Luparello, G; Luquin, L; Lu, S; Lutz, J R; Luvisetto, M; Madagodahettige-Don, D M; Maevskaya, A; Mager, M; Mahajan, A; Mahapatra, D P; Maire, A; Makhlyueva, I; Ma, K; Malaev, M; Maldonado Cervantes, I; Malek, M; Mal'Kevich, D; Malkiewicz, T; Malzacher, P; Mamonov, A; Manceau, L; Mangotra, L; Manko, V; Manso, F; Manzari, V; Mao, Y; Mares, J; Margagliotti, G V; Margotti, A; Marín, A; Martashvili, I; Martinengo, P; Martínez Davalos, A; Martínez García, G; Martínez, M I; Maruyama, Y; Ma, R; Marzari Chiesa, A; Masciocchi, S; Masera, M; Masetti, M; Masoni, A; Massacrier, L; Mastromarco, M; Mastroserio, A; Matthews, Z L; Mattos Tavares, B; Matyja, A; Mayani, D; Mazza, G; Mazzoni, M A; Meddi, F; Menchaca-Rocha, A; Mendez Lorenzo, P; Meoni, M; Mercado Pérez, J; Mereu, P; Miake, Y; Michalon, A; Miftakhov, N; Milosevic, J; Minafra, F; Mischke, A; Miskowiec, D; Mitu, C; Mizoguchi, K; Mlynarz, J; Mohanty, B; Molnar, L; Mondal, M M; Montaño Zetina, L; Monteno, M; Montes, E; Morando, M; Moretto, S; Morsch, A; Moukhanova, T; Muccifora, V; Mudnic, E; Muhuri, S; Müller, H; Munhoz, M G; Munoz, J; Musa, L; Musso, A; Nandi, B K; 
Nania, R; Nappi, E; Navach, F; Navin, S; Nayak, T K; Nazarenko, S; Nazarov, G; Nedosekin, A; Nendaz, F; Newby, J; Nianine, A; Nicassio, M; Nielsen, B S; Nikolaev, S; Nikolic, V; Nikulin, S; Nikulin, V; Nilsen, B S; Nilsson, M S; Noferini, F; Nomokonov, P; Nooren, G; Novitzky, N; Nyatha, A; Nygaard, C; Nyiri, A; Nystrand, J; Ochirov, A; Odyniec, G; Oeschler, H; Oinonen, M; Okada, K; Okada, Y; Oldenburg, M; Oleniacz, J; Oppedisano, C; Orsini, F; Ortíz Velázquez, A; Ortona, G; Oskamp, C; Oskarsson, A; Osmic, F; Österman, L; Ostrowski, P; Otterlund, I; Otwinowski, J; Øvrebekk, G; Oyama, K; Ozawa, K; Pachmayer, Y; Pachr, M; Padilla, F; Pagano, P; Paic, G; Painke, F; Pajares, C; Palaha, A; Palmeri, A; Pal, S K; Pal, S; Panse, R; Pappalardo, G S; Park, W J; Pastircák, B; Pastore, C; Paticchio, V; Pavlinov, A; Pawlak, T; Peitzmann, T; Pepato, A; Pereira, H; Peressounko, D; Pérez, C; Perini, D; Perrino, D; Peryt, W; Peschek, J; Pesci, A; Peskov, V; Pestov, Y; Peters, A J; Petrácek, V; Petridis, A; Petris, M; Petrovici, M; Petrov, P; Petta, C; Peyré, J; Piano, S; Piccotti, A; Pikna, M; Pillot, P; Pinsky, L; Pitz, N; Piuz, F; Platt, R; Pluta, J; Pocheptsov, T; Pochybova, S; Podesta Lerma, P L M; Poggio, F; Poghosyan, M G; Poghosyan, T; Polák, K; Polichtchouk, B; Polozov, P; Polyakov, V; Pommeresch, B; Pop, A; Posa, F; Poskon, M; Pospisil, V; Potukuchi, B; Pouthas, J; Prasad, S K; Preghenella, R; Prino, F; Pruneau, C A; Pshenichnov, I; Puddu, G; Pujahari, P; Pulvirenti, A; Punin, A; Punin, V; Putis, M; Putschke, J; Quercigh, E; Rachevski, A; Rademakers, A; Radomski, S; Räihä, T S; Rak, J; Rakotozafindrabe, A; Ramello, L; Ramírez Reyes, A; Rammler, M; Raniwala, R; Raniwala, S; Räsänen, S; Rashevskaya, I; Rath, S; Read, K F; Real, J; Redlich, K; Renfordt, R; Reolon, A R; Reshetin, A; Rettig, F; Revol, J P; Reygers, K; Ricaud, H; Riccati, L; Ricci, R A; Richter, M; Riedler, P; Riegler, W; Riggi, F; Rivetti, A; Rodriguez Cahuantzi, M; Røed, K; Röhrich, D; Román López, S; Romita, 
R; Ronchetti, F; Rosinský, P; Rosnet, P; Rossegger, S; Rossi, A; Roukoutakis, F; Rousseau, S; Roy, C; Roy, P; Rubio-Montero, A J; Rui, R; Rusanov, I; Russo, G; Ryabinkin, E; Rybicki, A; Sadovsky, S; Safarík, K; Sahoo, R; Saini, J; Saiz, P; Sakata, D; Salgado, C A; Salgueiro Dominques da Silva, R; Salur, S; Samanta, T; Sambyal, S; Samsonov, V; Sándor, L; Sandoval, A; Sano, M; Sano, S; Santo, R; Santoro, R; Sarkamo, J; Saturnini, P; Scapparone, E; Scarlassara, F; Scharenberg, R P; Schiaua, C; Schicker, R; Schindler, H; Schmidt, C; Schmidt, H R; Schossmaier, K; Schreiner, S; Schuchmann, S; Schukraft, J; Schutz, Y; Schwarz, K; Schweda, K; Scioli, G; Scomparin, E; Segato, G; Semenov, D; Senyukov, S; Seo, J; Serci, S; Serkin, L; Serradilla, E; Sevcenco, A; Sgura, I; Shabratova, G; Shahoyan, R; Sharkov, G; Sharma, N; Sharma, S; Shigaki, K; Shimomura, M; Shtejer, K; Sibiriak, Y; Siciliano, M; Sicking, E; Siddi, E; Siemiarczuk, T; Silenzi, A; Silvermyr, D; Simili, E; Simonetti, G; Singaraju, R; Singhal, V; Singh, R; Sinha, B C; Sinha, T; Sitar, B; Sitta, M; Skaali, T B; Skjerdal, K; Smakal, R; Smirnov, N; Snellings, R; Snow, H; Søgaard, C; Sokolov, O; Soloviev, A; Soltveit, H K; Soltz, R; Sommer, W; Son, C W; Song, M; Son, H S; Soos, C; Soramel, F; Soyk, D; Spyropoulou-Stassinaki, M; Srivastava, B K; Stachel, J; Staley, F; Stan, I; Stefanek, G; Stefanini, G; Steinbeck, T; Stenlund, E; Steyn, G; Stocco, D; Stock, R; Stolpovsky, P; Strmen, P; Suaide, A A P; Subieta Vásquez, M A; Sugitate, T; Suire, C; Sumbera, M; Susa, T; Swoboda, D; Symons, J; Szanto de Toledo, A; Szarka, I; Szostak, A; Szuba, M; Tadel, M; Tagridis, C; Takahara, A; Takahashi, J; Tanabe, R; Tapia Takaki, J D; Taureg, H; Tauro, A; Tavlet, M; Tejeda Muñoz, G; Telesca, A; Terrevoli, C; Thäder, J; Tieulent, R; Tlusty, D; Toia, A; Tolyhy, T; Torcato de Matos, C; Torii, H; Torralba, G; Toscano, L; Tosello, F; Tournaire, A; Traczyk, T; Tribedy, P; Tröger, G; Truesdale, D; Trzaska, W H; Tsiledakis, G; Tsilis, E; 
Tsuji, T; Tumkin, A; Turrisi, R; Turvey, A; Tveter, T S; Tydesjö, H; Tywoniuk, K; Ulery, J; Ullaland, K; Uras, A; Urbán, J; Urciuoli, G M; Usai, G L; Vacchi, A; Vala, M; Valencia Palomo, L; Vallero, S; van den Brink, A; van der Kolk, N; Vande Vyvre, P; van Leeuwen, M; Vannucci, L; Vargas, A; Varma, R; Vasiliev, A; Vassiliev, I; Vassiliou, M; Vechernin, V; Venaruzzo, M; Vercellin, E; Vergara, S; Vernet, R; Verweij, M; Vetlitskiy, I; Vickovic, L; Viesti, G; Vikhlyantsev, O; Vilakazi, Z; Villalobos Baillie, O; Vinogradov, A; Vinogradov, L; Vinogradov, Y; Virgili, T; Viyogi, Y P; Vodopianov, A; Voloshin, K; Voloshin, S; Volpe, G; von Haller, B; Vranic, D; Vrláková, J; Vulpescu, B; Wagner, B; Wagner, V; Wallet, L; Wan, R; Wang, D; Wang, Y; Watanabe, K; Wen, Q; Wessels, J; Wiechula, J; Wikne, J; Wilk, A; Wilk, G; Williams, M C S; Willis, N; Windelband, B; Xu, C; Yang, C; Yang, H; Yasnopolsky, A; Yermia, F; Yi, J; Yin, Z; Yokoyama, H; Yoo, I-K; Yuan, X; Yushmanov, I; Zabrodin, E; Zagreev, B; Zalite, A; Zampolli, C; Zanevsky, Yu; Zaporozhets, Y; Zarochentsev, A; Závada, P; Zbroszczyk, H; Zelnicek, P; Zenin, A; Zepeda, A; Zgura, I; Zhalov, M; Zhang, X; Zhou, D; Zhou, S; Zhu, J; Zichichi, A; Zinchenko, A; Zinovjev, G; Zinovjev, M; Zoccarato, Y; Zychácek, V

    2010-01-01

    ALICE (A Large Ion Collider Experiment) is the LHC (Large Hadron Collider) experiment devoted to investigating the strongly interacting matter created in nucleus-nucleus collisions at LHC energies. The ALICE ITS (Inner Tracking System) consists of six cylindrical layers of silicon detectors using three different technologies; in the outward direction: two layers of pixel detectors, followed by two layers each of drift and strip detectors. The number of parameters to be determined in the spatial alignment of the 2198 sensor modules of the ITS is about 13,000. The target alignment precision is well below 10 microns in some cases (pixels). The sources of alignment information include survey measurements, and reconstructed tracks from cosmic rays and from proton-proton collisions. The main track-based alignment method uses the Millepede global approach. An iterative local method was developed and used as well. We present the results obtained for the ITS alignment using about 10^5 charged tracks from cosmic rays that h...

  7. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.
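A common single-view approximation behind this kind of height estimation (not necessarily the exact calibration used in this paper) relates a standing person's image extent to the horizon line and a known camera height. The function below is a minimal sketch under the assumptions stated in its docstring; all names and pixel values are illustrative.

```python
def estimate_height(v_feet, v_head, v_horizon, camera_height_m):
    """Approximate a standing person's height from a single image.

    Assumes a roll-free camera, a flat ground plane, and a known
    camera height.  v_* are image rows in pixels with the row axis
    pointing downward, so v_feet > v_head for an upright person.
    """
    if v_feet <= v_horizon:
        raise ValueError("feet must project below the horizon line")
    # The ratio of image-plane extents approximates the ratio of
    # world heights (person height vs. camera height).
    return camera_height_m * (v_feet - v_head) / (v_feet - v_horizon)

# Person with feet at row 700, head at row 415, horizon at row 200,
# camera mounted 3.0 m above the ground:
h = estimate_height(700, 415, 200, 3.0)
print(round(h, 2))  # 1.71
```

In a surveillance pipeline the horizon row would itself come from the automatic calibration step, and per-frame estimates would be averaged over the track to reduce noise, which is the role the paper's error-correction step plays.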

  8. Specialized video systems for use in waste tanks

    International Nuclear Information System (INIS)

    Anderson, E.K.; Robinson, C.W.; Heckendorn, F.M.

    1992-01-01

    The Robotics Development Group at the Savannah River Site is developing a remote video system for use in underground radioactive waste storage tanks at the Savannah River Site, as a portion of its site support role. Viewing of the tank interiors and their associated annular spaces is an extremely valuable tool in assessing their condition and controlling their operation. Several specialized video systems have been built that provide remote viewing and lighting, including remotely controlled tank entry and exit. Positioning all control components away from the facility prevents potential personnel exposure to radiation and contamination. The SRS waste tanks are nominally 4.5 million liter (1.3 million gallon) underground tanks used to store liquid high-level radioactive waste generated by the site, awaiting final disposal. The typical waste tank (Figure 1) is of flattened shape (i.e. wider than it is high). The tanks sit in a dry secondary containment pan. The annular space between the tank wall and the secondary containment wall is continuously monitored for liquid intrusion and periodically inspected and documented; the latter was historically accomplished with remote still photography. The video systems include a camera, zoom lens, camera positioner, and vertical deployment mechanism. The assembly enters through a 125 mm (5 in) diameter opening. A special attribute of the systems is that they never become larger than the entry hole during camera aiming and similar operations, so they can always be retrieved. The latest systems are easily deployable to a remote setup point and can extend down vertically 15 meters (50 ft). The systems are expected to be a valuable asset to tank operations

  9. Robust Solar Position Sensor for Tracking Systems

    DEFF Research Database (Denmark)

    Ritchie, Ewen; Argeseanu, Alin; Leban, Krisztina Monika

    2009-01-01

    The paper proposes a new solar position sensor for use in tracking system control. The main advantages of the new solution are its robustness and low cost. The positioning accuracy of a tracking system that uses the new sensor is better than 1°. The new sensor uses the ancient principle of the solar clock. The sensitive elements are eight ordinary photoresistors. It is important to note that not all the sensors are selected simultaneously. It is not necessary for the sensor operating characteristics to be quasi-identical, because the sensor principle is based on extreme operating-duty measurement (bright or dark). In addition, the proposed solar sensor significantly simplifies the operation of the tracking control device.
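A minimal sketch of the sundial principle the abstract alludes to, assuming eight photoresistors arranged in a circle around a central shadow mast at 45° spacing. The geometry, thresholding and sector averaging below are hypothetical: this crude sketch neither handles shadows spanning the 0°/360° wrap nor reaches the sub-degree accuracy of the actual sensor.

```python
# Eight photoresistors sit in a circle around a central shadow mast,
# 45 degrees apart, like hour marks on a sundial.  Each reading is
# thresholded to a binary bright/dark value, which is why the
# sensors need not have matched characteristics.

SECTOR_DEG = 45  # angular spacing of adjacent sensors

def sun_azimuth(dark_flags):
    """Return the sun azimuth in degrees from 8 bright/dark flags.

    dark_flags[i] is True when sensor i (mounted at azimuth i*45
    degrees) lies in the mast's shadow; the sun is opposite the
    shadowed sensors.
    """
    shadowed = [i for i, dark in enumerate(dark_flags) if dark]
    if not shadowed:
        raise ValueError("no sensor in shadow (overcast or night)")
    # Average the shadowed sensor directions, then flip by 180 deg.
    shadow_dir = sum(i * SECTOR_DEG for i in shadowed) / len(shadowed)
    return (shadow_dir + 180) % 360

# Sensors 2 and 3 (at 90 and 135 degrees) are dark -> the shadow
# points to 112.5 degrees, so the sun sits at 292.5 degrees.
print(sun_azimuth([False, False, True, True, False, False, False, False]))
```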

  10. Status, recent developments and perspective of TINE-powered video system, release 3

    International Nuclear Information System (INIS)

    Weisse, S.; Melkumyan, D.; Duval, P.

    2012-01-01

    Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the past year, a milestone was reached as Video System 3 entered production-level operation at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus will be put on the integration of recording and playback of video sequences into Archive/DAQ, a standalone installation of the Video System on a notebook, and experiences running on 64-bit Windows 7. In addition, new client-side multi-platform GUI/application developments using Java are about to be released. Last but not least, although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered. (authors)

  11. Supercavitating Projectile Tracking System and Method

    Science.gov (United States)

    2009-12-30

    Distribution is unlimited. 20100104106. Attorney Docket No. 96681. SUPERCAVITATING PROJECTILE TRACKING SYSTEM AND METHOD. STATEMENT OF GOVERNMENT ... underwater track or path 14 of a supercavitating vehicle under surface 16 of a body of water. In this embodiment, passive acoustic or pressure transducers 12 are utilized to measure a pressure field produced by a moving supercavitating vehicle. The present invention provides a low-cost, reusable ...

  12. Real-time high-level video understanding using data warehouse

    Science.gov (United States)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

    High-level video content analysis such as video surveillance is often limited by the computational aspects of automatic image understanding, i.e. it requires huge computing resources for reasoning processes like categorization and huge amounts of data to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimise the data warehouse concepts for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, it is processed and the in-memory data model is updated. After some processing, possible interpretations of this data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally we show how this system becomes a high-semantic data container for external data mining.

  13. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Full Text Available Video applications on mobile wireless devices are a challenging task due to the limited capacity of batteries. The highly complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design concern as devices integrate complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD combines decoder control with the decoded data, classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output under energy-limited conditions. Experimental results demonstrate the efficiency of the proposed approach.
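The utility-driven allocation step can be illustrated with a greedy scheme that hands each discrete energy unit to whichever decoder subfunction currently offers the highest marginal utility; for diminishing (concave) returns this greedy choice maximizes total utility. The subfunction names and utility numbers below are invented for illustration and are not taken from the ESVD paper.

```python
import heapq

def allocate_energy(budget_units, utilities):
    """Greedily allocate discrete energy units to decoder
    subfunctions with diminishing returns.

    utilities maps a subfunction name to a list of marginal
    utilities: utilities[s][k] is the extra decoding quality gained
    by giving s its (k+1)-th energy unit.
    """
    # Max-heap entries: (negated marginal utility, name, step index).
    heap = [(-gains[0], s, 0) for s, gains in utilities.items() if gains]
    heapq.heapify(heap)
    alloc = {s: 0 for s in utilities}
    for _ in range(budget_units):
        if not heap:
            break  # every subfunction is saturated
        neg_gain, s, k = heapq.heappop(heap)
        alloc[s] += 1
        if k + 1 < len(utilities[s]):
            heapq.heappush(heap, (-utilities[s][k + 1], s, k + 1))
    return alloc

demo = {
    "motion_comp": [9, 6, 2],   # marginal utility per extra unit
    "deblocking":  [5, 1],
    "idct":        [7, 3],
}
print(allocate_energy(4, demo))  # {'motion_comp': 2, 'deblocking': 1, 'idct': 1}
```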

  14. Dynamic Ocean Track System Plus -

    Data.gov (United States)

    Department of Transportation — Dynamic Ocean Track System Plus (DOTS Plus) is a planning tool implemented at the ZOA, ZAN, and ZNY ARTCCs. It is utilized by Traffic Management Unit (TMU) personnel...

  15. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
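A minimal sketch of the fuzzy control idea, assuming triangular membership functions over the target's horizontal offset and a Sugeno-style weighted-average defuzzification; all ranges and rule consequents are illustrative, not the paper's actual rule base. The tilt channel would mirror this with the vertical offset.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pan_command(offset_px):
    """Map the target's horizontal offset from the image centre
    (pixels, positive = right) to a pan rate (deg/s, positive =
    right) with three fuzzy sets and weighted-average defuzzification.
    """
    mu = {
        "left":   tri(offset_px, -400, -200, 0),
        "center": tri(offset_px, -200, 0, 200),
        "right":  tri(offset_px, 0, 200, 400),
    }
    # Rule consequents: crisp pan rate associated with each set.
    rate = {"left": -10.0, "center": 0.0, "right": 10.0}
    num = sum(mu[k] * rate[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0

print(pan_command(100))   # target right of centre -> positive pan rate
```

The smooth transition between rules is what distinguishes this from a bang-bang controller: a target slightly off-centre produces a gentle correction rather than a full-rate slew.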

  16. Real-time resource allocation for tracking systems

    NARCIS (Netherlands)

    Satsangi, Y.; Whiteson, S.; Oliehoek, F.A.; Bouma, H.

    2017-01-01

    Automated tracking is key to many computer vision applications. However, many tracking systems struggle to perform in real-time due to the high computational cost of detecting people, especially in ultra high resolution images. We propose a new algorithm called PartiMax that greatly reduces this

  17. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft; the videos come from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
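The homography-estimation step between consecutive frames can be sketched with the standard Direct Linear Transform (DLT). A production mosaicker would wrap this in RANSAC over the SIFT matches to reject outliers; that loop is omitted here.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform estimate of the 3x3 homography H
    such that dst ~ H @ src (in homogeneous coordinates) for
    matched feature points, e.g. SIFT matches between consecutive
    frames.  src, dst: (N, 2) arrays with N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints
        # on the nine entries of H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Least-squares solution: right singular vector associated with
    # the smallest singular value.
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalise so that H[2, 2] == 1

# Points related by a pure 5-pixel translation in x:
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = src + [5, 0]
H = estimate_homography(src, dst)
print(np.round(H, 3))
```

With the homography in hand, each frame is warped into the mosaic's reference plane and blended; it is exactly these per-pixel warp and blend steps that parallelize so well on a GPU.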

  18. Development of the video streaming system for the radiation safety training

    International Nuclear Information System (INIS)

    Uemura, Jitsuya

    2005-01-01

    Radiation workers have to receive radiation safety training every year. It is very hard for them to receive the training within the limited number of training sessions offered. We therefore developed a new training system using video streaming techniques and opened a web page for the training on our homepage. Every worker can attend the video lecture at any time and at any place using his PC via the internet. After watching the video, the worker takes the completion examination. If he passes the examination, he is registered as a radiation worker in the database system for radiation control. (author)

  19. Patient tracking system

    International Nuclear Information System (INIS)

    Chapman, L.J.; Hakimi, R.; Salehi, D.; McCord, T.; Zionczkowski, B.; Churchill, R.

    1987-01-01

    This exhibit describes computer applications in monitoring patient tracking in radiology and the collection of management information (technologist productivity, patient waiting times, repeat rate, room utilization) and quality assurance information. An analysis of the reports that assist in determining staffing levels, training needs, and patient scheduling is presented. The system is designed to require minimal information input and maximal information output to assist radiologists, quality assurance coordinators, and management personnel in departmental operations

  20. Gaze inspired subtitle position evaluation for MOOCs videos

    Science.gov (United States)

    Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo

    2017-06-01

    Online educational resources, such as MOOCs, are becoming increasingly popular, especially in higher education. One of the most important media types for MOOCs is the course video. Besides the traditional bottom-positioned subtitles accompanying the videos, in recent years researchers have tried to develop more advanced algorithms to generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Inspired by image-based human eye tracking techniques, this work combines objective gaze-estimation statistics with a subjective user study to reach a convincing conclusion - speaker-following subtitles are more suitable for online educational videos.

  1. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

    Full Text Available In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor which detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge this is the first study using a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been carried out by direct observation, which is time-consuming, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, i.e. about 0.35 min of reviewing per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were only classified to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).
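The VMD trigger can be sketched as simple frame differencing: flag motion when enough pixels change by more than a grey-level threshold. The threshold values and the ROI mechanism below are illustrative stand-ins for the mini-DVR's adjustable sensitivity and detection area, not the commercial sensor's actual parameters.

```python
import numpy as np

def motion_detected(prev_frame, frame, diff_thresh=25, area_frac=0.01,
                    roi=None):
    """Simple video-motion-detection trigger.

    Flags motion when the fraction of pixels whose absolute
    grey-level change exceeds diff_thresh is larger than area_frac.
    roi optionally restricts detection to a (row_slice, col_slice)
    pair, mirroring the sensor's adjustable detection area.
    """
    # Widen to int16 so the subtraction of uint8 frames cannot wrap.
    a = prev_frame.astype(np.int16)
    b = frame.astype(np.int16)
    if roi is not None:
        a, b = a[roi], b[roi]
    changed = np.abs(b - a) > diff_thresh
    return bool(changed.mean() > area_frac)

# A bright intruder appears in an otherwise static frame:
prev = np.zeros((120, 160), np.uint8)
cur = prev.copy()
cur[50:70, 60:80] = 200               # 400 of 19200 pixels change
print(motion_detected(prev, cur))     # True
```

Raising `area_frac` or shrinking the ROI to the flower itself is what suppresses the unwanted recordings (e.g. swaying vegetation) the abstract mentions.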

  2. GPS-based tracking system for TOPEX orbit determination

    Science.gov (United States)

    Melbourne, W. G.

    1984-01-01

    A tracking system concept is discussed that is based on the utilization of the constellation of Navstar satellites in the Global Positioning System (GPS). The concept involves simultaneous and continuous metric tracking of the signals from all visible Navstar satellites by approximately six globally distributed ground terminals and by the TOPEX spacecraft at 1300-km altitude. Error studies indicate that this system could be capable of obtaining decimeter position accuracies and, most importantly, around 5 cm in the radial component, which is key to exploiting the full accuracy potential of the altimetric measurements for ocean topography. Topics covered include: background of the GPS, the precision mode for utilization of the system, past JPL research on using the GPS in precision applications, the present tracking system concept for high-accuracy satellite positioning, and results from a proof-of-concept demonstration.

  3. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    Science.gov (United States)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  4. Hazardous chemical tracking system (HAZ-TRAC)

    International Nuclear Information System (INIS)

    Bramlette, J.D.; Ewart, S.M.; Jones, C.E.

    1990-07-01

    Westinghouse Idaho Nuclear Company, Inc. (WINCO) developed and implemented a computerized hazardous chemical tracking system, referred to as Haz-Trac, for use at the Idaho Chemical Processing Plant (ICPP). Haz-Trac is designed to provide a means to improve the accuracy and reliability of chemical information, which enhances the overall quality and safety of ICPP operations. The system tracks all chemicals and chemical components from the time they enter the ICPP until the chemical changes form, is used, or becomes a waste. The system runs on a Hewlett-Packard (HP) 3000 Series 70 computer. The system is written in COBOL and uses VIEW/3000, TurboIMAGE/DBMS 3000, OMNIDEX, and SPEEDWARE. The HP 3000 may be accessed throughout the ICPP, and from remote locations, using data communication lines. Haz-Trac went into production in October, 1989. Currently, over 1910 chemicals and chemical components are tracked on the system. More than 2500 personnel hours were saved during the first six months of operation. Cost savings have been realized by reducing the time needed to collect and compile reporting information, identifying and disposing of unneeded chemicals, and eliminating duplicate inventories. Haz-Trac maintains information required by the Superfund Amendment Reauthorization Act (SARA), the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) and the Occupational Safety and Health Administration (OSHA)

  5. Hazardous chemical tracking system (HAZ-TRAC)

    Energy Technology Data Exchange (ETDEWEB)

    Bramlette, J D; Ewart, S M; Jones, C E

    1990-07-01

    Westinghouse Idaho Nuclear Company, Inc. (WINCO) developed and implemented a computerized hazardous chemical tracking system, referred to as Haz-Trac, for use at the Idaho Chemical Processing Plant (ICPP). Haz-Trac is designed to provide a means to improve the accuracy and reliability of chemical information, which enhances the overall quality and safety of ICPP operations. The system tracks all chemicals and chemical components from the time they enter the ICPP until the chemical changes form, is used, or becomes a waste. The system runs on a Hewlett-Packard (HP) 3000 Series 70 computer. The system is written in COBOL and uses VIEW/3000, TurboIMAGE/DBMS 3000, OMNIDEX, and SPEEDWARE. The HP 3000 may be accessed throughout the ICPP, and from remote locations, using data communication lines. Haz-Trac went into production in October, 1989. Currently, over 1910 chemicals and chemical components are tracked on the system. More than 2500 personnel hours were saved during the first six months of operation. Cost savings have been realized by reducing the time needed to collect and compile reporting information, identifying and disposing of unneeded chemicals, and eliminating duplicate inventories. Haz-Trac maintains information required by the Superfund Amendment Reauthorization Act (SARA), the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) and the Occupational Safety and Health Administration (OSHA).

  6. Visual analysis of trash bin processing on garbage trucks in low resolution video

    Science.gov (United States)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, plus mean-shift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.

  7. Whose track is it anyway?

    DEFF Research Database (Denmark)

    Flora, Janne; Andersen, Astrid Oberborbeck

    2017-01-01

    tracked their hunting routes, registered animals caught and observed, and photographed and videoed important places, events, and other phenomena they found interesting and relevant to register. This essay describes the conception and implementation of Piniariarneq, and uses this experience as a lens...

  8. Video-based data acquisition system for use in eye blink classical conditioning procedures in sheep.

    Science.gov (United States)

    Nation, Kelsey; Birge, Adam; Lunde, Emily; Cudd, Timothy; Goodlett, Charles; Washburn, Shannon

    2017-10-01

    Pavlovian eye blink conditioning (EBC) has been extensively studied in humans and laboratory animals, providing one of the best-understood models of learning in neuroscience. EBC has been especially useful in translational studies of cerebellar and hippocampal function. We recently reported a novel extension of EBC procedures for use in sheep, and now describe new advances in a digital video-based system. The system delivers paired presentations of conditioned stimuli (CSs; a tone) and unconditioned stimuli (USs; an air puff to the eye), or CS-alone "unpaired" trials. This system tracks the linear distance between the eyelids to identify blinks occurring as either unconditioned (URs) or conditioned (CRs) responses, to a resolution of 5 ms. A separate software application (Eye Blink Reviewer) is used to review and autoscore the trial CRs and URs, on the basis of a set of predetermined rules, permitting an operator to confirm (or rescore, if needed) the autoscore results, thereby providing quality control for accuracy of scoring. Learning curves may then be quantified in terms of the frequencies of CRs over sessions, both on trials with paired CS-US presentations and on CS-alone trials. The latency to CR onset, latency to CR peak, and occurrence of URs are also obtained. As we demonstrated in two example cases, this video-based system provides efficient automated means to conduct EBC in sheep and can facilitate fully powered studies with multigroup designs that involve paired and unpaired training. This can help extend new studies in sheep, a species well suited for translational studies of neurodevelopmental disorders resulting from gestational exposure to drugs, toxins, or intrauterine distress.
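
    The autoscoring idea described above — classifying each blink as a conditioned or unconditioned response from its onset latency relative to the CS and US — can be sketched as below. The timing windows, function names, and the "no response" handling are illustrative assumptions, not the paper's actual rule set.

```python
def score_blink(onset_ms, cs_onset_ms=0, us_onset_ms=500):
    """Classify one blink by onset latency relative to the CS and US.

    Illustrative rule: a blink starting after the CS but before the US
    is a conditioned response (CR); one starting at or after the US is
    an unconditioned response (UR).
    """
    if onset_ms is None:
        return "no response"
    if cs_onset_ms <= onset_ms < us_onset_ms:
        return "CR"
    if onset_ms >= us_onset_ms:
        return "UR"
    return "spontaneous"  # blink began before the CS


def cr_frequency(trial_onsets_ms):
    """Fraction of trials scored as CRs: one point on a learning curve."""
    scores = [score_blink(t) for t in trial_onsets_ms]
    return scores.count("CR") / len(scores)
```

An operator-review step, as in the Eye Blink Reviewer application, would then confirm or rescore these automatic labels.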

  9. Active Multimodal Sensor System for Target Recognition and Tracking.

    Science.gov (United States)

    Qu, Yufu; Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-06-28

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system.

  10. Robust tracking control of uncertain Duffing-Holmes control systems

    International Nuclear Information System (INIS)

    Sun, Y.-J.

    2009-01-01

    In this paper, the notion of virtual stabilizability for dynamical systems is introduced and the virtual stabilizability of uncertain Duffing-Holmes control systems is investigated. Based on the time-domain approach with differential inequality, a tracking control is proposed such that the states of uncertain Duffing-Holmes control system track the desired trajectories with any pre-specified exponential decay rate and convergence radius. Moreover, we present an algorithm to find such a tracking control. Finally, a numerical example is provided to illustrate the use of the main results.

  11. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    Science.gov (United States)

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
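
    The two summary statistics reported above — accuracy as root-mean-square error and precision as the standard deviation of the errors — can be computed as follows. This is a generic sketch of those standard definitions, not code from the study.

```python
import math


def rmse(errors):
    """Root-mean-square error: the accuracy measure (e.g., 0.15 mm static)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))


def precision_sd(errors):
    """Sample standard deviation of the errors: the precision measure."""
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / (len(errors) - 1)
    return math.sqrt(var)
```

Both are computed per marker over the tracked frames and then averaged across test conditions.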

  12. A remote educational system in medicine using digital video.

    Science.gov (United States)

    Hahm, Joon Soo; Lee, Hang Lak; Kim, Sun Il; Shimizu, Shuji; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Tae Eun; Yun, Ji Won; Park, Yong Jin; Naoki, Nakashima; Koji, Okamura

    2007-03-01

    Telemedicine has opened the door to a wide range of learning experiences and simultaneous feedback to doctors and students at various remote locations. However, there are limitations such as lack of approved international standards of ethics. The aim of our study was to establish a telemedical education system through the development of high quality images, using the digital transfer system on a high-speed network. Using telemedicine, surgical images can be sent not only to domestic areas but also abroad, and opinions regarding surgical procedures can be exchanged between the operation room and a remote place. The Asia Pacific Information Infrastructure (APII) link, a submarine cable between Busan and Fukuoka, was used to connect Korea with Japan, and the Korea Advanced Research Network (KOREN) was used to connect Busan with Seoul. Teleconference and video streaming between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan were realized using the Digital Video Transfer System (DVTS) over an IPv4 network. Four endoscopic surgeries were successfully transmitted between Seoul and Kyushu, while concomitant teleconferences took place between the two throughout the operations. Enough bandwidth of 60 Mbps could be kept for two-line transmissions. The quality of the transmitted video image had no frame loss at a rate of 30 images per second. The sound was also clear, and time delay was less than 0.3 sec. Our experience has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over Internet protocol, which is easy to perform, reliable, and economical. Our network system may become a promising tool for worldwide telemedical communication in the future.

  13. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    Full Text Available With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in a cloud-based video surveillance system, replicas occupy a considerable amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of the users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.
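
    The two mechanisms described above can be sketched in a few lines: a replica count driven by a security level, and a prefetch set derived from camera location correlation. The thresholds, counts, and function names here are illustrative assumptions only, not the paper's actual policy.

```python
def replica_count(security_level, base=2, max_replicas=5):
    """Map a stream's security level (0 = routine footage, higher =
    more critical) to a number of stored replicas. Critical footage
    gets more copies; routine footage keeps only the availability
    minimum. Values are illustrative, not the paper's policy.
    """
    return min(base + security_level, max_replicas)


def prefetch_candidates(camera_id, adjacency):
    """Cameras spatially adjacent to the one being played back; under
    the location-correlation assumption, their recent footage is a
    natural set to pull into the cache ahead of time."""
    return adjacency.get(camera_id, [])
```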

  14. Two axes sun tracking system with PLC control

    International Nuclear Information System (INIS)

    Abdallah, Salah; Nijmeh, Salem

    2004-01-01

    In this paper, an electromechanical, two axes sun tracking system is designed and constructed. The programming method of control with an open loop system is employed where the programmable logic controller is used to control the motion of the sun tracking surface. An experimental study was performed to investigate the effect of using two axes tracking on the solar energy collected. The collected energy was measured and compared with that on a fixed surface tilted at 32 deg. towards the south. The results indicate that the measured collected solar energy on the moving surface was significantly larger than that on a fixed surface. The two axes tracking surface showed a better performance with an increase in the collected energy of up to 41.34% compared with the fixed surface
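
    An open-loop tracker of this kind drives the surface from computed sun angles rather than from a sun sensor. A minimal sketch of the standard solar-elevation calculation (Cooper's declination approximation) is given below; the paper's actual motion schedule is programmed in the PLC, so this is only the underlying geometry, not their control code.

```python
import math


def solar_elevation_deg(latitude_deg, day_of_year, solar_hour):
    """Solar elevation angle from standard formulas.

    decl: Cooper's approximation of the solar declination (degrees).
    hour_angle: 15 degrees per hour from solar noon, negative before noon.
    """
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))
```

For example, at the equinox (declination near zero) and solar noon, a site at latitude 32 degrees — like the fixed comparison surface above — sees the sun at an elevation of about 58 degrees.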

  15. Adaptive and accelerated tracking-learning-detection

    Science.gov (United States)

    Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu

    2013-08-01

    An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD), based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. The first is adaptation: the algorithm does not depend on pre-defined scanning grids, since it generates the scale space online. The second is efficiency: it uses algorithm-level acceleration, namely scale prediction with an auto-regression and moving average (ARMA) model that learns the object motion to narrow the detector's search range, and a fixed number of positive and negative samples that ensures a constant retrieval time, as well as CPU and GPU parallel technology for hardware acceleration. In addition, to obtain a better effect, some TLD details are redesigned: results are integrated with a weight that includes both the normalized correlation coefficient and the scale size, and distance metric thresholds are adjusted online. A contrastive experiment on success rate, center location error, and execution time shows a performance and efficiency upgrade over the state-of-the-art TLD on partial TLD datasets and Shenzhou IX return capsule image sequences. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.

  16. PAMTRAK: A personnel and material tracking system

    International Nuclear Information System (INIS)

    Anspach, D.A.; Anspach, J.P.; Crain, B. Jr.

    1996-01-01

    There is a need for an automated system for protecting and monitoring sensitive or classified parts and material. Sandia has developed a real-time personnel and material tracking system (PAMTRAK) that has been installed at selected DOE facilities. It safeguards sensitive parts and material by tracking tags worn by personnel and by monitoring sensors attached to the parts or material. It includes remote control and alarm display capabilities and a complementary program in Keyhole to display measured material attributes remotely. This paper describes the design goals, the system components, current installations, and the benefits a site can expect when using PAMTRAK

  17. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required high-resolution coded mask and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz, and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms, operating directly on the compressive sampling images, are developed. A mixture of Gaussians is applied in the compressive image space to model the background image and detect the foreground. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm can achieve a real time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
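
    The sparse-coefficient step above — solving an l1-regularized least-squares problem over a template dictionary — can be sketched with iterative shrinkage-thresholding (ISTA). The paper does not name its solver, so ISTA is a stand-in here, and the dictionary and observation below are toy data.

```python
import numpy as np


def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def ista(A, b, lam=0.05, n_iter=200):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    A: dictionary whose columns are target and noise templates;
    b: the observed compressive feature vector;
    returns the sparse coefficient vector over the templates.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The tracked target is then taken as the candidate whose coefficients concentrate on the target templates rather than the noise templates.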

  18. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. Content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with low frame rate, stand out because they are smaller than videos yet still retain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice is hampered by diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure, tracking lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and arrived at new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  19. Integrating IPix immersive video surveillance with unattended and remote monitoring (UNARM) systems

    International Nuclear Information System (INIS)

    Michel, K.D.; Klosterbuer, S.F.; Langner, D.C.

    2004-01-01

    Commercially available IPix cameras and software are being researched as a means by which an inspector can be virtually immersed into a nuclear facility. A single IPix camera can provide 360 by 180 degree views with full pan-tilt-zoom capability, and with no moving parts on the camera mount. Immersive video technology can be merged into the current Unattended and Remote Monitoring (UNARM) system, thereby providing an integrated system of monitoring capabilities that tie together radiation, video, isotopic analysis, Global Positioning System (GPS), etc. The integration of the immersive video capability with other monitoring methods already in place provides a significantly enhanced situational awareness to the International Atomic Energy Agency (IAEA) inspectors.

  20. Interactive video audio system: communication server for INDECT portal

    Science.gov (United States)

    Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem

    2014-05-01

    The paper deals with the presentation of the IVAS system within the 7FP EU INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information. It is a part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). This IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice, or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can receive pictures or videos sent by the commander in the dispatching centre and respond to the command via text or multimedia messages taken with their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.

  1. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  2. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm resolution in images was obtained by using a super high quality TV camera developed for X-radiography viewing a NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, thus many test samples can be sequentially observed when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 x 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on a RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR type reactor fuels and for the investigation of moving objects.

  3. Understanding learning within a commercial video game: A case study

    OpenAIRE

    Fowler, Allan

    2015-01-01

    There has been increasing interest in the debate on the value and relevance of using video games for learning. Some of this interest stems from frustration with current educational methods; some also stems from observations of the large numbers of children who play video games. This paper finds that children can learn basic construction skills from playing a video game called World of Goo. The study also employed novel eye-tracking technology to measure endogenous ey...

  4. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that must be balanced in a practical target search and tracking system. This paper proposes a target tracking system that compounds a preliminary and a precise stage. The system uses a large field of view to search for the target; after the target is found and confirmed, it switches to a small field of view for tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This compound preliminary-and-precise design extends the reach of the system and improves target tracking accuracy, and the method has practical value.
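
    Using two PID parameter groups, one per field of view, can be sketched as below. The gain values, names, and the "wide"/"narrow" switch are illustrative assumptions; the paper does not publish its tuned parameters.

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


# Two gain sets, one per field of view; values are illustrative only.
COARSE_GAINS = (0.8, 0.05, 0.1)   # wide FOV: fast slewing toward the target
FINE_GAINS = (0.4, 0.02, 0.2)     # narrow FOV: gentler, more damped tracking


def make_controller(fov):
    """Select the gain group for the current field of view."""
    return PID(*(COARSE_GAINS if fov == "wide" else FINE_GAINS))
```

On a field switch, the controller is rebuilt (or its integral reset) so that the accumulated wide-FOV error does not kick the narrow-FOV loop.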

  5. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser

  6. Lung tumor tracking in fluoroscopic video based on optical flow

    International Nuclear Information System (INIS)

    Xu Qianyi; Hamilton, Russell J.; Schowengerdt, Robert A.; Alexander, Brian; Jiang, Steve B.

    2008-01-01

    Respiratory gating and tumor tracking for dynamic multileaf collimator delivery require accurate and real-time localization of the lung tumor position during treatment. Deriving tumor position from external surrogates such as abdominal surface motion may have large uncertainties due to the intra- and interfraction variations of the correlation between the external surrogates and internal tumor motion. Implanted fiducial markers can be used to track tumors fluoroscopically in real time with sufficient accuracy. However, it may not be a practical procedure when implanting fiducials bronchoscopically. In this work, a method is presented to track the lung tumor mass or relevant anatomic features projected in fluoroscopic images without implanted fiducial markers based on an optical flow algorithm. The algorithm generates the centroid position of the tracked target and ignores shape changes of the tumor mass shadow. The tracking starts with a segmented tumor projection in an initial image frame. Then, the optical flow between this and all incoming frames acquired during treatment delivery is computed as initial estimations of tumor centroid displacements. The tumor contour in the initial frame is transferred to the incoming frames based on the average of the motion vectors, and its positions in the incoming frames are determined by fine-tuning the contour positions using a template matching algorithm with a small search range. The tracking results were validated by comparing with clinician determined contours on each frame. The position difference in 95% of the frames was found to be less than 1.4 pixels (∼0.7 mm) in the best case and 2.8 pixels (∼1.4 mm) in the worst case for the five patients studied.
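
    The fine-tuning step described above — refining the optical-flow estimate of the contour position by template matching within a small search range — can be sketched with an exhaustive sum-of-squared-differences search. This is a generic illustration on toy arrays, not the authors' implementation, and the SSD cost is one common choice of matching criterion.

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))


def refine_position(frame, template, x0, y0, search=2):
    """Fine-tune a coarse estimate (x0, y0) of the template's top-left
    corner by exhaustive SSD matching within +/- `search` pixels, the
    small search range made possible by the optical-flow prediction."""
    th, tw = len(template), len(template[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + th > len(frame) or x + tw > len(frame[0]):
                continue
            patch = [row[x:x + tw] for row in frame[y:y + th]]
            cost = ssd(patch, template)
            if best is None or cost < best[0]:
                best = (cost, x, y)
    return best[1], best[2]
```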

  7. REAL TIME TRACKING OBYEK BERGERAK DENGAN WEBCAM BERBASIS WARNA DENGAN METODE BACKGROUND SUBTRACTION

    Directory of Open Access Journals (Sweden)

    Aris Tri Jaka Harjanta

    2017-07-01

    Full Text Available Object tracking in real-time video is an important topic in the study of surveillance systems (Dhananjaya, Rama, and Thimmaiah 2015). Detection, information extraction, and tracking of moving objects are applications of computer vision. Applications that make use of moving-object tracking include UAV (Unmanned Aerial Vehicle) surveillance, better known as unmanned machines/vehicles; indoor monitoring systems, which monitor conditions inside a room; and traffic monitoring, which can observe the movement of all objects in real time. In real-time object tracking, many factors must be considered and computed, since all parameters and noise from surrounding objects that we do not need to observe lie in the same scene as the object under observation. In this research, the method used is background subtraction for color-based detection and tracking of moving objects in real time, using a webcam and the open-source OpenCV library.
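
    The core of background subtraction — maintain a background model and flag pixels that deviate from it — can be sketched in a few lines. A running-average background with a fixed threshold is used here for illustration; the learning rate, threshold, and plain-list representation are assumptions, and a real OpenCV pipeline would operate on color images with library primitives.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1-alpha)*bg + alpha*frame,
    so slow scene changes are absorbed while moving objects are not."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]


def foreground_mask(bg, frame, thresh=30):
    """Pixels differing from the background by more than `thresh`
    are marked 1 (moving foreground), all others 0."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]
```

The foreground mask would then be filtered (e.g., by color and blob size) and the surviving blobs tracked frame to frame.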

  8. Tracking flow of leukocytes in blood for drug analysis

    Science.gov (United States)

    Basharat, Arslan; Turner, Wesley; Stephens, Gillian; Badillo, Benjamin; Lumpkin, Rick; Andre, Patrick; Perera, Amitha

    2011-03-01

    Modern microscopy techniques allow imaging of circulating blood components under vascular flow conditions. The resulting video sequences provide unique insights into the behavior of blood cells within the vasculature and can be used as a method to monitor and quantitate the recruitment of inflammatory cells at sites of vascular injury/inflammation, potentially serving as a pharmacodynamic biomarker to help screen new therapies and individualize doses and combinations of drugs. However, manual analysis of these video sequences is intractable, requiring hours per 400 second video clip. In this paper, we present an automated technique to analyze, in real time, the behavior and recruitment of human leukocytes in whole blood under physiological conditions of shear through a simple multi-channel fluorescence microscope. This technique detects and tracks the recruitment of leukocytes to a bioactive surface coated on a flow chamber. Rolling cells (cells which partially bind to the bioactive matrix) are detected and counted, and have their velocity measured and graphed. The challenges here include: high cell density, appearance similarity, and low (1 Hz) frame rate. Our approach performs frame-differencing-based motion segmentation, track initialization, and online tracking of individual leukocytes.
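
    At a 1 Hz frame rate, cells move far between frames, so track maintenance reduces to associating detections across consecutive frames and reading velocity off the resulting track. A minimal sketch is below: greedy nearest-neighbour association with a distance gate, plus a mean-displacement velocity. The gating distance and greedy strategy are illustrative assumptions, not the authors' method.

```python
import math


def associate(prev, curr, max_dist=20.0):
    """Greedy nearest-neighbour association between detection centroids
    in consecutive frames; returns (i_prev, j_curr) index pairs.
    Detections farther apart than max_dist are never matched."""
    pairs, used = [], set()
    for i, p in enumerate(prev):
        best = None
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)
            if d <= max_dist and (best is None or d < best[0]):
                best = (d, j)
        if best:
            used.add(best[1])
            pairs.append((i, best[1]))
    return pairs


def rolling_velocity(track, dt=1.0):
    """Mean displacement per frame interval; at 1 Hz this is pixels/second,
    the quantity graphed for each rolling cell."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    return sum(steps) / (len(steps) * dt)
```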

  9. Virtual Video Prototyping for Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offer new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design issues, since one cannot avoid paying attention to the physical, real-world constraints and to details in the usage-interaction between users and technology. From the users' perspective, during our evaluation of the virtual video prototype, we experienced how it enabled users to relate...

  10. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD. Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
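A software model of such an order-statistics filter can clarify what the hardware computes. The sketch below uses a 3x3x3 spatiotemporal median (the window size is an assumption); the FPLD version realizes the same order statistics bit-serially:

```python
import numpy as np

def spatiotemporal_median(frames, t):
    """Order-statistics filtering: replace each interior pixel of frame t
    with the median of its 3x3 spatial neighbourhood taken over the
    previous, current, and next frames (27 samples per output pixel)."""
    T, H, W = frames.shape
    out = frames[t].copy()
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            window = frames[t - 1:t + 2, y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.median(window)
    return out
```

Impulse noise confined to a single frame is removed because 26 of the 27 samples in each window are clean, which is the benefit of exploiting temporal as well as spatial correlation.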

  11. Effects of the pyrethroid insecticide Cypermethrin on the locomotor activity of the wolf spider Pardosa amentata: quantitative analysis employing computer-automated video tracking

    DEFF Research Database (Denmark)

    Baatrup, E; Bayley, M

    1993-01-01

    Pardosa amentata was quantified in an open field setup, using computer-automated video tracking. Each spider was recorded for 24 hr prior to pesticide exposure. After topical application of 4.6 ng of Cypermethrin, the animal was recorded for a further 48 hr. Finally, after 9 days of recovery, the spider...... paresis, the effects of Cypermethrin were evident in reduced path length, average velocity, and maximum velocity and an increase in the time spent in quiescence. Also, the pyrethroid disrupted the consistent distributions of walking velocity and periods of quiescence seen prior to pesticide application...

  12. Launch vehicle tracking enhancement through Global Positioning System Metric Tracking

    Science.gov (United States)

    Moore, T. C.; Li, Hanchu; Gray, T.; Doran, A.

    United Launch Alliance (ULA) initiated operational flights of both the Atlas V and Delta IV launch vehicle families in 2002. The Atlas V and Delta IV launch vehicles were developed jointly with the US Air Force (USAF) as part of the Evolved Expendable Launch Vehicle (EELV) program. Both Launch Vehicle (LV) families have provided 100% mission success since their respective inaugural launches and demonstrated launch capability from both Vandenberg Air Force Base (VAFB) on the Western Test Range and Cape Canaveral Air Force Station (CCAFS) on the Eastern Test Range. However, the current EELV fleet communications, tracking, and control architecture and technology, which date back to the origins of the space launch business, require support by a large and high-cost ground footprint. The USAF has embarked on an initiative known as the Future Flight Safety System (FFSS) that will significantly reduce Test Range Operations and Maintenance (O&M) cost by closing facilities and decommissioning ground assets. In support of the FFSS, a Global Positioning System Metric Tracking (GPS MT) System based on the Global Positioning System (GPS) satellite constellation has been developed for EELV, which will allow both Ranges to divest some of their radar assets. The Air Force, ULA and Space Vector have flown the first two Atlas certification vehicles, demonstrating the successful operation of the GPS MT System. The first Atlas V certification flight was completed in February 2012 from CCAFS, the second Atlas V certification flight from VAFB was completed in September 2012, and the third certification flight, on a Delta IV, was completed in October 2012 from CCAFS. The GPS MT System will provide precise LV position, velocity and timing information that can replace ground radar tracking resource functionality.
The GPS MT System will provide an independent position/velocity S-band telemetry downlink to support the current man-in-the-loop, ground-based commanded destruct of an anomalous flight.

  13. Advanced alignment of the ATLAS tracking system

    CERN Document Server

    AUTHOR|(CDS)2085334; The ATLAS collaboration

    2016-01-01

    In order to reconstruct the trajectories of charged particles, the ATLAS experiment exploits a tracking system built using different technologies, silicon planar modules or microstrips (PIX and SCT detectors) and gaseous drift tubes (TRT), all embedded in a 2 T solenoidal magnetic field. Misalignments of the active detector elements and deformations of the structures (which can lead to "weak modes") deteriorate the resolution of track reconstruction and lead to systematic biases on the measured track parameters. The applied alignment procedures exploit various advanced techniques in order to minimise track-hit residuals and remove detector deformations. For LHC Run II, the Pixel Detector has been refurbished and upgraded with the installation of a new pixel layer, the Insertable B-Layer (IBL).
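For a single alignment degree of freedom, minimizing track-hit residuals reduces to a one-parameter least-squares problem. The toy sketch below (not the ATLAS alignment code; residual values are hypothetical) solves for a module's translation offset:

```python
import numpy as np

# Track-hit residuals r_i = measured hit - track extrapolation. For a pure
# translation delta of a module, chi2(delta) = sum_i (r_i - delta)^2 / sigma_i^2,
# which is minimized by the inverse-variance-weighted mean of the residuals.
def align_translation(residuals, sigmas):
    """Weighted least-squares estimate of a module's translation offset."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    r = np.asarray(residuals, dtype=float)
    return float((w * r).sum() / w.sum())

# Hypothetical residuals (mm) from a module shifted by about 0.12 mm:
delta = align_translation([0.10, 0.14, 0.11, 0.13], [0.02, 0.02, 0.02, 0.02])
```

The full alignment generalizes this to thousands of correlated degrees of freedom, which is where weak modes (chi-square-invariant deformations) arise.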

  14. Video-Guidance Design for the DART Rendezvous Mission

    Science.gov (United States)

    Ruth, Michael; Tracy, Chisholm

    2004-01-01

    NASA's Demonstration of Autonomous Rendezvous Technology (DART) mission will validate a number of different guidance technologies, including state-differenced GPS transfers and close-approach video guidance. The video guidance for DART will employ NASA/Marshall's Advanced Video Guidance Sensor (AVGS). This paper focuses on the terminal phase of the DART mission that includes close-approach maneuvers under AVGS guidance. The closed-loop video guidance design for DART is driven by a number of competing requirements, including a need for maximizing tracking bandwidths while coping with measurement noise and the need to minimize RCS firings. A range of different strategies for attitude control and docking guidance have been considered for the DART mission, and design decisions are driven by a goal of minimizing both the design complexity and the effects of video guidance lags. The DART design employs an indirect docking approach, in which the guidance position targets are defined using relative attitude information. Flight simulation results have proven the effectiveness of the video guidance design.

  15. Geometric accuracy of a novel gimbals based radiation therapy tumor tracking system.

    Science.gov (United States)

    Depuydt, Tom; Verellen, Dirk; Haas, Olivier; Gevaert, Thierry; Linthout, Nadine; Duchateau, Michael; Tournel, Koen; Reynders, Truus; Leysen, Katrien; Hoogeman, Mischa; Storme, Guy; De Ridder, Mark

    2011-03-01

    VERO is a novel platform for image guided stereotactic body radiotherapy. Orthogonal gimbals hold the linac-MLC assembly, allowing real-time moving tumor tracking. This study determines the geometric accuracy of the tracking. To determine the tracking error, a 1D moving phantom produced sinusoidal motion with frequencies up to 30 breaths per minute (bpm). Tumor trajectories of patients were reproduced using a 2D robot and pursued with the gimbals tracking system prototype. Using the moving beam light field and a digital-camera-based detection unit, tracking errors, system lag and the equivalence of pan/tilt performance were measured. The system lag was 47.7 ms for panning and 47.6 ms for tilting. Applying system lag compensation, sinusoidal motion tracking was accurate, with 90% percentile tracking errors E(90%) below 0.14 mm. The 2D tumor trajectories were tracked with an average E(90%) of 0.54 mm, and tracking error standard deviations of 0.20 mm for pan and 0.22 mm for tilt. In terms of dynamic behavior, the gimbaled linac of the VERO system proved to be an excellent approach for providing accurate real-time tumor tracking in radiation therapy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
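The reported ~48 ms system lag is the kind of delay a simple velocity-based predictor can compensate. The sketch below illustrates the idea on a sinusoidal target; the prediction scheme, amplitude and frequency are illustrative assumptions, not taken from the paper:

```python
import math

def predict(pos_now, pos_prev, dt, lag):
    """First-order lag compensation: extrapolate the target position one
    system lag ahead using the most recent measured velocity."""
    v = (pos_now - pos_prev) / dt
    return pos_now + v * lag

# Sinusoidal breathing motion: 20 breaths per minute, 10 mm amplitude.
f, amp = 20 / 60.0, 10.0
lag, dt = 0.0477, 0.01   # system lag (s) and sampling interval (s)

def pos(t):
    return amp * math.sin(2 * math.pi * f * t)

t = 0.8
pred = predict(pos(t), pos(t - dt), dt, lag)
err_compensated = abs(pred - pos(t + lag))
err_raw = abs(pos(t) - pos(t + lag))   # error when aiming at the stale position
```

Because the lag is short relative to the breathing period, linear extrapolation recovers most of the motion that would otherwise be lost to the delay.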

  16. An Indoor Tracking System Based on Bluetooth Technology

    OpenAIRE

    Opoku, Samuel King

    2012-01-01

    Implementations of tracking systems have become prevalent in modern technology due to their ability to detect the location of objects. Objects are usually tracked using trackers based on GPS, GSM, RFID or Bluetooth signal strength. These mechanisms usually require line-of-sight operation, suffer from limited coverage, or need a low-level programming language for accessing Bluetooth signal strength. This paper presents an alternative technique for tracking the movement of indoor objects ba...

  17. Mobile Robot Positioning by using Low-Cost Visual Tracking System

    Directory of Open Access Journals (Sweden)

    Ruangpayoongsak Niramon

    2017-01-01

    Full Text Available This paper presents an application of a visual tracking system to mobile robot positioning. The proposed method is verified on a constructed low-cost tracking system consisting of a 2 DOF pan-tilt unit, a web camera and a distance sensor. The motion of the pan-tilt joints is realized and controlled by an LQR controller running on a microcontroller. Without the need for camera calibration, the robot trajectory is tracked by a Kalman filter integrating distance information and joint positions. The experimental results demonstrate the validity of the proposed positioning technique, and the obtained mobile robot trajectory is benchmarked against laser rangefinder positioning. The implemented system can successfully track a mobile robot driving at 14 cm/s.
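The fusion step can be illustrated with a textbook constant-velocity Kalman filter over noisy 1D position measurements. This is a minimal sketch, not the paper's filter; noise covariances, time step and the deterministic noise pattern are illustrative:

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter: state [position, velocity],
    position-only measurements, returning the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # measurement model
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Hypothetical noisy readings of a robot moving at 14 cm/s:
true = [14.0 * 0.1 * k for k in range(60)]
meas = [p + (0.5 if k % 2 else -0.5) for k, p in enumerate(true)]
est = kalman_track(meas)
```

In the paper's setup the measurement would combine the distance sensor reading with the pan-tilt joint angles; the filter structure is the same.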

  18. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

    This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation-based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show accurate tracking of the runway edges during the landing phase under various lighting conditions. The results also suggest that such position estimates would greatly improve the positional accuracy of the UAV during the takeoff and landing phases. The robustness of the proposed algorithm is further validated using hardware-in-the-loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.

  19. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    Science.gov (United States)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.

  20. IMPLEMENTATION OF OBJECT TRACKING ALGORITHMS ON THE BASIS OF CUDA TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    B. A. Zalesky

    2014-01-01

    Full Text Available A fast version of a correlation algorithm to track objects in video sequences made by a nonstabilized camcorder is presented. The algorithm is based on comparison of local correlations of the object image and regions of the video frames. The algorithm is implemented in the CUDA programming technology. Application of CUDA allowed the algorithm to attain real-time execution. To improve its precision and stability, a robust version of the Kalman filter has been incorporated into the flowchart. Tests showed the applicability of the algorithm to practical object tracking.
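The core of such a correlation tracker, comparing the object template against nearby regions of the new frame and keeping the best-scoring position, can be sketched in plain NumPy (a serial reference; window size and the normalized-correlation score are assumptions):

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def track(frame, template, prev_yx, search=5):
    """Local correlation search: scan a small window around the previous
    position and keep the location with the highest correlation score."""
    h, w = template.shape
    y0, x0 = prev_yx
    best, best_yx = -2.0, prev_yx
    for y in range(max(0, y0 - search), min(frame.shape[0] - h, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(frame.shape[1] - w, x0 + search) + 1):
            score = ncc(frame[y:y + h, x:x + w], template)
            if score > best:
                best, best_yx = score, (y, x)
    return best_yx
```

A CUDA implementation parallelizes the candidate positions of this search window across threads, which is what makes real-time execution attainable; a Kalman filter can then smooth the resulting trajectory, as in the paper.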

  1. Inexpensive remote video surveillance system with microcomputer and solar cells

    International Nuclear Information System (INIS)

    Guevara Betancourt, Edder

    2013-01-01

    A low-cost prototype for remote video surveillance is developed around a Raspberry Pi (RPI) board. Additionally, the theoretical basis for energy independence has been developed through solar cells and a battery bank. Some existing commercial monitoring systems are studied and analyzed, covering components such as cameras, communication devices (WiFi and 3G), free software packages for video surveillance, control mechanisms and the theory of remote photovoltaic systems. A number of steps are taken to implement the module and to install, configure and test each of the hardware and software elements that make it up, exploring the feasibility of adding intelligence to the system using the chosen software. Events generated by motion detection can be viewed, archived and extracted in a simple, intuitive way. The implementation of the video surveillance module with a microcomputer and motion detection software (ZoneMinder) has proven to be an option with great potential; as a platform for monitoring and recording data it has provided all the tools needed for robust and secure surveillance. (author) [es

  2. Initial validations for pursuing irradiation using a gimbals tracking system

    International Nuclear Information System (INIS)

    Takayama, Kenji; Mizowaki, Takashi; Kokubo, Masaki; Kawada, Noriyuki; Nakayama, Hiroshi; Narita, Yuichiro; Nagano, Kazuo; Kamino, Yuichiro; Hiraoka, Masahiro

    2009-01-01

    Our newly designed image-guided radiotherapy (IGRT) system enables dynamic tracking irradiation with a gimbaled X-ray head and a dual on-board kilovolt imaging subsystem for real-time target localization. Examinations using a computer-controlled, three-dimensionally movable phantom demonstrated that our gimbals tracking system significantly reduced motion blurring effects in the dose distribution compared to the non-tracking state.

  3. Distributed formation tracking using local coordinate systems

    DEFF Research Database (Denmark)

    Yang, Qingkai; Cao, Ming; Garcia de Marina, Hector

    2018-01-01

    This paper studies the formation tracking problem for multi-agent systems, for which a distributed estimator–controller scheme is designed relying only on the agents’ local coordinate systems such that the centroid of the controlled formation tracks a given trajectory. By introducing a gradient...... descent term into the estimator, the explicit knowledge of the bound of the agents’ speed is not necessary in contrast to existing works, and each agent is able to compute the centroid of the whole formation in finite time. Then, based on the centroid estimation, a distributed control algorithm...

  4. Real-time video streaming system for LHD experiment using IP multicast

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yamamoto, Takashi; Yoshida, Masanobu; Nagayama, Yoshio; Hasegawa, Makoto

    2009-01-01

    In order to accomplish smooth cooperative research, remote participation plays an important role. For this purpose, the authors have been developing various applications for remote participation in the LHD (Large Helical Device) experiments, such as a Web interface for visualization of acquired data. The video streaming system is one of them. It is useful for grasping the status of the ongoing experiment remotely, and we provide the video images displayed in the control room to remote users. However, usual streaming servers cannot send video images without delay. The delay changes depending on how the images are sent, but even a little delay might become critical if the researchers use the images to adjust the diagnostic devices. One of the main causes of delay is the procedure of compressing and decompressing the images. Furthermore, the commonly used video compression methods are lossy; they remove less important information to reduce the size. However, lossy images cannot be used for physical analysis because the original information is lost. Therefore, video images for remote participation should be sent without compression in order to minimize the delay and to supply high quality images suitable for physical analysis. However, sending uncompressed video images requires large network bandwidth. For example, sending five 16-bit color SXGA frames per second requires 100 Mbps. Furthermore, the video images must be sent to several remote sites simultaneously. It is hard for a server PC to handle such a large data stream. To cope with this problem, the authors adopted IP multicast to send video images to several remote sites at once. Because IP multicast packets are sent only to the networks on which the clients want the data, the load of the server does not depend on the number of clients and the network load is reduced. In this paper, the authors discuss the feasibility of a high bandwidth video streaming system using IP multicast. (author)
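A minimal Python sketch of the sender side: one uncompressed frame is split into sequence-numbered UDP datagrams addressed to a multicast group. The group address, port, payload size and header layout are illustrative, not from the LHD system:

```python
import socket
import struct

MCAST_GRP, MCAST_PORT = "239.1.1.1", 5004   # example multicast group and port
MAX_PAYLOAD = 1400                          # stay under a typical Ethernet MTU

def make_sender(ttl=1):
    """UDP socket configured for multicast transmission."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

def chunk_frame(frame_bytes, max_payload=MAX_PAYLOAD):
    """Split one raw frame into datagrams, each prefixed with a sequence
    number and the total frame size so receivers can reassemble it."""
    chunks = []
    for seq, off in enumerate(range(0, len(frame_bytes), max_payload)):
        header = struct.pack("!HI", seq, len(frame_bytes))
        chunks.append(header + frame_bytes[off:off + max_payload])
    return chunks

# One 16-bit SXGA frame is 1280 * 1024 * 2 bytes; at 5 frames/s this is
# about 105 Mbit/s, matching the ~100 Mbps figure quoted in the abstract.
frame = bytes(1280 * 1024 * 2)
packets = chunk_frame(frame)
# sender = make_sender()
# for p in packets:
#     sender.sendto(p, (MCAST_GRP, MCAST_PORT))
```

The server transmits each datagram once; the network replicates it only toward subscribed receivers, which is why the server load is independent of the number of clients.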

  5. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  6. The design of red-blue 3D video fusion system based on DM642

    Science.gov (United States)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction typical of 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, and extracts the R component from one camera while the G and B components are extracted synchronously from the other, finally outputting the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of chrominance and keeps the picture's color saturation above 95% of the original. An enhancement algorithm optimized to reduce the amount of data processed during video fusion shortens the fusion time and improves the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video and provide a pleasant experience to audiences wearing red-blue glasses.
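The component-extraction step amounts to taking R from one view and G, B from the other. A NumPy sketch on 8-bit RGB arrays (the DM642 pipeline performs this after its YCbCr-to-RGB conversion; the pixel values here are illustrative):

```python
import numpy as np

def fuse_red_blue(left_rgb, right_rgb):
    """Anaglyph fusion: red channel from the left camera, green and blue
    channels from the right, to be viewed through red-blue glasses."""
    fused = right_rgb.copy()
    fused[..., 0] = left_rgb[..., 0]
    return fused

left = np.zeros((4, 4, 3), np.uint8)
left[..., 0] = 200           # left view contributes the red component
right = np.zeros((4, 4, 3), np.uint8)
right[..., 1] = 120          # right view contributes green...
right[..., 2] = 90           # ...and blue
out = fuse_red_blue(left, right)
```

Each eye then receives (through its filter) the view from one camera, producing the depth impression; retaining G and B from one full view is what limits the chrominance loss.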

  7. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw videos and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature (mel-frequency cepstral coefficients) is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in the dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
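The pair-wise constraint, that edited segments should outscore trimmed raw segments, can be written as a standard margin (hinge) ranking loss. The linear sketch below uses made-up feature vectors and omits the paper's latent variables:

```python
import numpy as np

def pairwise_ranking_loss(w, edited, raw, margin=1.0):
    """Hinge loss over all (edited, raw) pairs: penalize whenever an edited
    segment fails to outscore a raw segment by at least the margin."""
    losses = [max(0.0, margin - float(w @ e) + float(w @ r))
              for e in edited for r in raw]
    return sum(losses) / len(losses)

w = np.array([1.0, 0.0])                                  # candidate weights
edited = [np.array([2.0, 0.3]), np.array([1.8, 0.1])]     # highlight features
raw = [np.array([0.5, 0.2]), np.array([0.2, 0.9])]        # trimmed features
loss = pairwise_ranking_loss(w, edited, raw)
```

Training searches for the w minimizing this loss over all mined video pairs; the latent formulation in the paper additionally selects which segments within each video the constraint applies to.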

  8. Authenticated tracking and monitoring system (ATMS) tracking shipments from an Australian uranium mine

    International Nuclear Information System (INIS)

    Schoeneman, J.L.

    1998-01-01

    The Authenticated Tracking and Monitoring System (ATMS) answers the need for global monitoring of the status and location of sensitive items on a worldwide basis, 24 hours a day. ATMS uses wireless sensor packs to monitor the status of the items and environmental conditions. A receiver and processing unit collect a variety of sensor event data. The collected data are transmitted to the INMARSAT satellite communication system, which then sends the data to appropriate ground stations. Authentication and encryption algorithms secure the data during communication activities. A typical ATMS application would be to track and monitor the safety and security of a number of items in transit along a scheduled shipping route. The resulting tracking, timing, and status information could then be processed to ensure compliance with various agreements. Following discussions between the Australian Safeguards Office (ASO), the US Department of Energy (DOE), and Sandia National Laboratories (SNL) in early 1995, the parties mutually agreed to conduct and evaluate a field trial of a prototype ATMS to track and monitor shipments of uranium ore concentrate (UOC) from an operating uranium mine in Australia to a final destination in Rotterdam, the Netherlands, with numerous stops along the way. During the months of February and March 1998, the trial was conducted on a worldwide basis, with tracking and monitoring stations located at sites in both Australia and the US. This paper describes ATMS and the trial.

  9. Tracking strategy for photovoltaic solar systems in high latitudes

    International Nuclear Information System (INIS)

    Quesada, Guillermo; Guillon, Laura; Rousse, Daniel R.; Mehrtash, Mostafa; Dutil, Yvan; Paradis, Pierre-Luc

    2015-01-01

    Highlights: • In cloudy conditions tracking the sun is ineffective. • A methodology to estimate a theoretical threshold for solar tracking was developed. • A tracking strategy to maximize electricity production was proposed. - Abstract: Several studies show that from about 20% to 50% more solar energy can be recovered by using photovoltaic systems that track the sun rather than systems set at a fixed angle. For overcast or cloudy days, recent studies propose the use of a set position in which each photovoltaic panel faces toward the zenith (horizontal position). This approach claims that, compared to a panel that follows the sun's path, a horizontal panel increases the amount of solar radiation captured on such days and subsequently the quantity of electricity produced. The present work assesses the hourly and seasonal performance of a sun-tracking photovoltaic panel at high latitudes. A theoretical method based on an isotropic sky model was formulated, implemented, and used in a case study analysis of a grid-connected photovoltaic system in Montreal, Canada. The results obtained, based on the definition of a critical hourly global solar radiation, were validated numerically and experimentally. The study confirmed that a zenith-set sun tracking strategy for overcast or mostly cloudy days in summer is not advantageous.
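Under an isotropic sky model, the track-versus-zenith decision reduces to comparing the plane-of-array irradiance of the two orientations. A sketch with illustrative irradiance values (not the paper's Montreal data):

```python
import math

def poa_tracked(gbn, gd, theta_z):
    """Sun-pointing panel: full beam-normal irradiance plus isotropic diffuse
    weighted by the sky view factor of a panel tilted at the zenith angle."""
    return gbn + gd * (1 + math.cos(theta_z)) / 2

def poa_horizontal(gbn, gd, theta_z):
    """Zenith-facing panel: beam projected onto the horizontal plus all diffuse."""
    return gbn * math.cos(theta_z) + gd

def best_position(gbn, gd, theta_z):
    """Track the sun only when it collects more than the zenith set position;
    this comparison is what a critical-radiation threshold encodes."""
    if poa_tracked(gbn, gd, theta_z) > poa_horizontal(gbn, gd, theta_z):
        return "track"
    return "zenith"

# gbn: beam-normal irradiance (W/m2), gd: diffuse horizontal irradiance (W/m2)
clear = best_position(800.0, 100.0, math.radians(60))    # sunny: beam dominates
overcast = best_position(0.0, 200.0, math.radians(60))   # overcast: all diffuse
```

On overcast days the tracked panel sees less of the diffuse sky dome than the horizontal panel does, which is why parking at the zenith wins when the beam component vanishes.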

  10. Transcom's next move: Improvements to DOE's transportation satellite tracking systems

    International Nuclear Information System (INIS)

    Harmon, L.H.; Harris, A.D. III; Driscoll, K.L.; Ellis, L.G.

    1990-01-01

    In today's society, the use of satellites is becoming the state-of-the-art method of tracking shipments. The United States Department of Energy (US DOE) has advanced technology in this area with its transportation tracking and communications system, TRANSCOM, which has been in operation for over one year. TRANSCOM was developed by DOE to monitor selected, unclassified shipments of radioactive materials across the country. With the latest technology in satellite communications, Long Range Navigation (Loran), and computer networks, TRANSCOM tracks shipments in near-real time, disseminates information on each shipment to authorized users of the system, and offers two-way communications between vehicle operators and TRANSCOM users anywhere in the country. TRANSCOM's successful tracking record, during fiscal year 1989, includes shipments of spent fuel, cesium, uranium hexafluoride, and demonstration shipments for the Waste Isolation Pilot Plant (WIPP). Plans for fiscal year 1990 include tracking additional shipments, implementing system enhancements designed to meet the users' needs, and continuing to research the technology of tracking systems so that TRANSCOM can provide its users with the newest technology available in satellite communications. 3 refs., 1 fig

  11. Kinect-Based Moving Human Tracking System with Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Abdel Mehsen Ahmad

    2017-04-01

    Full Text Available This paper is an extension of work originally presented and published in the IEEE International Multidisciplinary Conference on Engineering Technology (IMCET). This work presents the design and implementation of a moving human tracking system with obstacle avoidance. The system scans the environment by using Kinect, a 3D sensor, and tracks the center of mass of a specific user by using Processing, an open source computer programming language. An Arduino microcontroller is used to drive motors, enabling the robot to move towards the tracked user and avoid obstacles hampering the trajectory. The implemented system is tested under different lighting conditions and the performance is analyzed using several generated depth images.
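The tracking loop reduces to segmenting the user in the depth image, taking the center of mass, and steering the motors toward it. The sketch below segments by a crude depth band (the band limits and dead-band are assumptions; the real system relies on Kinect's user tracking):

```python
import numpy as np

def user_centroid(depth_mm, near=800, far=2000):
    """Center of mass of pixels whose depth (mm) falls inside the user's band."""
    ys, xs = np.nonzero((depth_mm > near) & (depth_mm < far))
    if len(ys) == 0:
        return None
    return (ys.mean(), xs.mean())

def steer(cx, frame_width, deadband=20):
    """Turn toward the tracked user; drive forward when roughly centered."""
    err = cx - frame_width / 2
    if abs(err) <= deadband:
        return "forward"
    return "right" if err > 0 else "left"

depth = np.full((48, 64), 3000)   # empty room, beyond the user's depth band
depth[10:30, 50:60] = 1500        # user standing toward the right of the frame
cy, cx = user_centroid(depth)
command = steer(cx, 64)
```

Obstacle avoidance then overrides this command whenever a near-range region appears in the planned direction of travel.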

  12. Eye tracking in user experience design

    CERN Document Server

    Romano Bergstorm, Jennifer

    2014-01-01

    Eye Tracking for User Experience Design explores the many applications of eye tracking to better understand how users view and interact with technology. Ten leading experts in eye tracking discuss how they have taken advantage of this new technology to understand, design, and evaluate user experience. Real-world stories are included from these experts who have used eye tracking during the design and development of products ranging from information websites to immersive games. They also explore recent advances in the technology which tracks how users interact with mobile devices, large-screen displays and video game consoles. Methods for combining eye tracking with other research techniques for a more holistic understanding of the user experience are discussed. This is an invaluable resource to those who want to learn how eye tracking can be used to better understand and design for their users. * Includes highly relevant examples and information for those who perform user research and design interactive experi...

  13. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.

  14. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of the Tokamak building.

  15. Research of real-time video processing system based on 6678 multi-core DSP

    Science.gov (United States)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly toward intelligent applications, and complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image fusion, and image stabilization and enhancement into an organic whole, with good real-time behavior and superior performance. It overcomes the defects of traditional video processing systems, such as simple functionality and single-purpose products, and addresses the needs of video applications in security monitoring and related fields, allowing video surveillance to be used to full effect and improving enterprise economic benefits.

  16. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The video data are compressed with the JPEG image compression standard, and the monitoring images are then transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first describes the necessity of the system, then briefly introduces the hardware and software design, and then details the configuration and compilation of the embedded Linux operating system and the compiling and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experimental testing, the remote video monitoring system for the combine harvester achieved 30 fps at a resolution of 800x600, and the response delay over the public network was about 40 ms.
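The paper's wire format is not described in this abstract; one common way to ship JPEG-compressed frames over a TCP connection is a length-prefixed framing, sketched below. The header layout (millisecond timestamp plus payload length) is an assumption for illustration, not the authors' protocol.

```python
import struct

# Header: 8-byte timestamp (ms) + 4-byte payload length, network byte order.
_HEADER = "!QI"
_HEADER_SIZE = struct.calcsize(_HEADER)

def pack_frame(jpeg_bytes, timestamp_ms):
    """Prefix a JPEG-compressed frame with a fixed header so the monitoring
    center can delimit frames on a continuous TCP byte stream."""
    return struct.pack(_HEADER, timestamp_ms, len(jpeg_bytes)) + jpeg_bytes

def unpack_frame(buffer):
    """Parse one framed message from the front of `buffer`.
    Returns (timestamp_ms, jpeg_bytes, remaining_bytes).
    Assumes the buffer already holds at least one complete frame."""
    ts, length = struct.unpack(_HEADER, buffer[:_HEADER_SIZE])
    end = _HEADER_SIZE + length
    return ts, buffer[_HEADER_SIZE:end], buffer[end:]
```

A real deployment would add sequence numbers, reconnection logic and authentication on top of this.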

  17. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    Science.gov (United States)

    2015-11-04

    Coastal Inlets Research Program Particle Tracking Model (PTM) with Coastal Modeling System (CMS) The Particle Tracking Model (PTM) is a Lagrangian...currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave...and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development
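PTM's numerical scheme is not reproduced in this record, but the essence of Lagrangian particle tracking is advecting passive particles through a velocity field. The deliberately simplified forward-Euler sketch below illustrates the idea; an operational PTM run uses CMS-supplied wave and current forcing and more accurate integration.

```python
def advect_particle(pos, velocity_field, dt, steps):
    """Advance a passive particle through a 2D velocity field with
    forward-Euler steps -- the basic operation of a Lagrangian
    particle-tracking model. `velocity_field(x, y)` returns (u, v)."""
    x, y = pos
    path = [(x, y)]
    for _ in range(steps):
        u, v = velocity_field(x, y)  # sample the (possibly coupled) forcing
        x += u * dt
        y += v * dt
        path.append((x, y))
    return path
```

Swapping the Euler step for a higher-order scheme (e.g. RK4) changes only the loop body, which is why Lagrangian trackers pair naturally with externally computed current fields.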

  18. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

    Full Text Available The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.
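The complexity-distortion optimization is only summarized in the abstract; as a toy illustration of allocating a computational budget across multiple channels, here is a greedy marginal-gain sketch. The distortion curve (each extra unit of computation halves a channel's distortion) and the channel names are assumptions for illustration, not the paper's model.

```python
def allocate_cycles(channels, total_units):
    """Greedy allocation of computational units across channels: each unit
    goes to the channel whose distortion would drop the most -- a common
    heuristic for complexity-distortion trade-offs.
    `channels` is a list of (name, initial_distortion) pairs."""
    alloc = {name: 0 for name, _ in channels}
    dist = dict(channels)  # current distortion per channel
    for _ in range(total_units):
        # Toy model: one more unit halves that channel's distortion,
        # so the marginal gain is dist/2.
        best = max(dist, key=lambda c: dist[c] - dist[c] / 2)
        dist[best] /= 2
        alloc[best] += 1
    return alloc
```

The greedy loop is a stand-in for the paper's global optimization; with a convex distortion model the two coincide, which is why marginal-gain allocation is a common baseline.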

  19. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  20. Measuring energy expenditure in sports by thermal video analysis

    DEFF Research Database (Denmark)

    Gade, Rikke; Larsen, Ryan Godsk; Moeslund, Thomas B.

    2017-01-01

    Estimation of human energy expenditure in sports and exercise contributes to performance analyses and tracking of physical activity levels. The focus of this work is to develop a video-based method for estimation of energy expenditure in athletes. We propose a method using thermal video analysis...... to automatically extract the cyclic motion pattern, in walking and running represented as steps, and analyse the frequency. Experiments are performed with one subject in two different tests, each at 5, 8, 10, and 12 km/h. The results of our proposed video-based method are compared to concurrent measurements......
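Extracting a step frequency from a cyclic motion signal can be sketched with a simple autocorrelation peak search. This toy version assumes a clean 1-D motion signal sampled at the camera frame rate; it is an illustration of the frequency-analysis step, not the authors' thermal-video pipeline.

```python
def step_frequency(signal, fps):
    """Estimate the dominant cyclic frequency (e.g. step rate) of a 1-D
    motion signal by finding the lag that maximizes its autocorrelation."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]

    def ac(lag):
        # Unnormalized autocorrelation at the given lag.
        return sum(centered[i] * centered[i + lag] for i in range(n - lag))

    # Search lags from a minimum plausible period up to half the signal.
    best_lag = max(range(2, n // 2), key=ac)
    return fps / best_lag
```

In practice the signal would be band-pass filtered first so that camera noise and slow posture drift do not compete with the step periodicity.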

  1. [Telemedicine with digital video transport system].

    Science.gov (United States)

    Hahm, Joon Soo; Shimizu, Shuji; Nakashima, Naoki; Byun, Tae Jun; Lee, Hang Lak; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Sun Il; Kim, Tae Eun; Yun, Jiwon; Park, Yong Jin

    2004-06-01

    The growth of technology based on internet protocol has affected informatics and automatic control in medical fields. The aim of this study was to establish a telemedical educational system by developing high-quality image transfer using DVTS (digital video transport system) over a high-speed internet network. Using telemedicine, we were able to send surgical images not only domestically but also internationally. Moreover, we could discuss the conditions of surgical procedures in the operating room and seminar room. The Korea-Japan cable network (KJCN) runs through a submarine cable between Busan and Fukuoka. On the other hand, the Korea advanced research network (KOREN) was used to connect Busan and Seoul. To link images between the Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan, we set up a teleconference system and a recorded image-streaming system with DVTS over an IPv4 network. Two operative cases were transmitted successfully. We could keep enough bandwidth of 60 Mbps for two-line transmission. The transmitted moving images showed no frame loss at a rate of 30 frames per second. The sound was also clear, and the time delay was less than 0.3 sec. Our study has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over internet protocol. It is easy to perform, reliable, and also economical. Thus, it will be a promising tool in remote medicine for worldwide telemedical communication in the future.

  2. Tracking Control of Nonlinear Mechanical Systems

    NARCIS (Netherlands)

    Lefeber, A.A.J.

    2000-01-01

    The subject of this thesis is the design of tracking controllers for certain classes of mechanical systems. The thesis consists of two parts. In the first part an accurate mathematical model of the mechanical system under consideration is assumed to be given. The goal is to follow a certain

  3. A review of video security training and assessment-systems and their applications

    International Nuclear Information System (INIS)

    Cellucci, J.; Hall, R.J.

    1991-01-01

    This paper reports that during the last 10 years, computer-aided video data collection and playback systems have been used as nuclear facility security training and assessment tools with varying degrees of success. These mobile systems have been used by trained security personnel for response-force training, vulnerability assessment, force-on-force exercises and crisis management. Typically, synchronous recordings from multiple video cameras, communications audio, and digital sensor inputs are played back to the exercise participants and then edited for training and briefing. Factors that influence user acceptance include: frequency of use, the demands placed on security personnel, fear of punishment, user training requirements and equipment cost. The introduction of S-VHS video and new software for scenario planning, video editing and data reduction should bring about a wider range of security applications and supply the opportunity for significant cost sharing with other user groups

  4. Secure Video Surveillance System (SVSS) for unannounced safeguards inspections

    International Nuclear Information System (INIS)

    Galdoz, Erwin G.; Pinkalla, Mark

    2010-01-01

    The Secure Video Surveillance System (SVSS) is a collaborative effort between the U.S. Department of Energy (DOE), Sandia National Laboratories (SNL), and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC). The joint project addresses specific requirements of redundant surveillance systems installed in two South American nuclear facilities as a tool to support unannounced inspections conducted by ABACC and the International Atomic Energy Agency (IAEA). The surveillance covers the critical time (as much as a few hours) between the notification of an inspection and the access of inspectors to the location in the facility where surveillance equipment is installed. ABACC and the IAEA currently use the EURATOM Multiple Optical Surveillance System (EMOSS). This outdated system is no longer available or supported by the manufacturer. The current EMOSS system has met the project objective; however, the lack of available replacement parts and system support has made this system unsustainable and has increased the risk of an inoperable system. A new system that utilizes current technology and is maintainable is required to replace the aging EMOSS system. ABACC intends to replace one of the existing ABACC EMOSS systems with the Secure Video Surveillance System. SVSS utilizes commercial off-the-shelf (COTS) technologies for all individual components. Sandia National Laboratories supported the system design for SVSS to meet safeguards requirements, i.e., tamper indication, data authentication, etc. The SVSS consists of two video surveillance cameras linked securely to a data collection unit. The collection unit is capable of retaining historical surveillance data for at least three hours with picture intervals as short as 1 sec. Images in .jpg format are available to inspectors using various software review tools. SNL has delivered two SVSS systems for test and evaluation at the ABACC Safeguards Laboratory. An additional 'prototype' system remains

  5. Evaluation of environmental commitment tracking systems for use at CDOT.

    Science.gov (United States)

    2011-10-01

    "The purpose of this study is to review existing Environmental Tracking Systems (ETSs) used by other, : select state Departments of Transportation (DOTs), as well as the existing Environmental Commitment : Tracking System (ECTS) currently in use by C...

  6. Grants Reporting and Tracking System (GRTS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Grants Reporting and Tracking System (GRTS) is the primary tool for management and oversight of EPA's Nonpoint Source (NPS) Pollution Control Program. GRTS pulls...

  7. Tracking Accuracy of a Real-Time Fiducial Tracking System for Patient Positioning and Monitoring in Radiation Therapy

    International Nuclear Information System (INIS)

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W.

    2010-01-01

    Purpose: In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. Methods and Materials: The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. Results: The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. Conclusions: This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.

  8. Quality Assurance Tracking System - R7 (QATS-R7)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This is metadata documentation for the Quality Assurance Tracking System - R7, an EPA Region 7 resource that tracks information on quality assurance reviews. Also...

  9. Pamela tracking system status report

    CERN Document Server

    Taccetti, F; Bonechi, L; Bongi, M; Boscherini, M; Castellini, G; D'Alessandro, R; Gabbanini, A; Grandi, M; Papini, P; Piccardi, S; Ricciarini, S; Spillantini, P; Straulino, S; Tesi, M; Vannuccini, E

    2002-01-01

    The Pamela apparatus will be launched at the end of 2002 on board the Russian satellite Resurs DK. The tracking system, composed of six planes of silicon sensors inserted inside a permanent magnetic field, has been intensively tested over the last few years. Results of these tests have shown a good signal-to-noise ratio and an excellent spatial resolution, which should allow measurement of the antiproton flux in an energy range from 80 MeV up to 190 GeV. The production of the final detector modules is about to start, and mechanical and thermal tests on the tracking tower are being performed according to the specifications of the Russian launcher and satellite.

  10. An auxiliary frequency tracking system for general purpose lock-in amplifiers

    Science.gov (United States)

    Xie, Kai; Chen, Liuhao; Huang, Anfeng; Zhao, Kai; Zhang, Hanlu

    2018-04-01

    Lock-in amplifiers (LIAs) are designed to measure weak signals submerged by noise. This is achieved with a signal modulator to avoid low-frequency noise and a narrow-band filter to suppress out-of-band noise. In asynchronous measurement, even a slight frequency deviation between the modulator and the reference may lead to measurement error because the filter’s passband is not flat. Because many commercial LIAs are unable to track frequency deviations, in this paper we propose an auxiliary frequency tracking system. We analyze the measurement error caused by the frequency deviation and propose both a tracking method and an auto-tracking system. This approach requires only three basic parameters, which can be obtained from any general purpose LIA via its communications interface, to calculate the frequency deviation from the phase difference. The proposed auxiliary tracking system is designed as a peripheral connected to the LIA’s serial port, removing the need for an additional power supply. The test results verified the effectiveness of the proposed system; the modified commercial LIA (model SR-850) was able to track the frequency deviation and continuous drift. For step frequency deviations, a steady tracking error of less than 0.001% was achieved within three adjustments, and the worst tracking accuracy was still better than 0.1% for a continuous frequency drift. The tracking system can be used to expand the application scope of commercial LIAs, especially for remote measurements in which the modulation clock and the local reference are separated.
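The relation underlying such tracking can be made concrete: a constant frequency deviation appears as a linear drift of the lock-in phase reading, so delta_f = delta_phi / (360 * delta_t) with phases in degrees. A minimal sketch, assuming two phase readings taken a known interval apart via the LIA's communications interface (the function and wrap convention are illustrative, not the paper's implementation):

```python
def frequency_deviation(phase1_deg, phase2_deg, interval_s):
    """Infer the frequency deviation (Hz) between modulator and reference
    from the drift of the lock-in phase reading over a known interval:
    delta_f = delta_phi / (360 * delta_t), phases in degrees."""
    # Wrap the phase difference into (-180, 180] so a small negative
    # deviation is not mistaken for a large positive one.
    dphi = (phase2_deg - phase1_deg + 180.0) % 360.0 - 180.0
    return dphi / (360.0 * interval_s)
```

An auto-tracking loop would feed this deviation back into the reference frequency until the residual drift falls below the target accuracy.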

  11. Perspective : component tracking on the Nova system

    International Nuclear Information System (INIS)

    MacDonald, S.

    1999-01-01

    The issue of introducing Component Tracking as a service to natural gas producers, shippers and straddle plant operators was discussed. Approximately 39 companies in the industry were contacted by consultants at Nova Gas Transmission in an effort to assess if introducing this service would add value to individual producers. The numerous implications that may have to be dealt with if Component Tracking is introduced were also described. Component Tracking would provide an equitable approach to the allocation of molecules in the gas stream, and could provide producers with the ability to avoid capital outlay in field plants by alternatively contracting for recovery of the liquids at the straddle plants. Component Tracking is to be voluntary and each shipper would be able to decide whether to utilize the service at each of their receipt points onto the Nova system

  12. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  13. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    Science.gov (United States)

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal
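The paper's kinematic computations and thresholds are not reproduced in this record, but the general velocity-threshold idea behind such classification (as in I-VT algorithms) can be sketched as follows. The threshold values below are illustrative defaults, not the authors' calibrated parameters.

```python
def classify_gaze(velocities_deg_s, saccade_thresh=30.0, pursuit_thresh=5.0):
    """Label each gaze sample by its angular velocity, in the spirit of
    velocity-threshold classification (I-VT): velocities above the saccade
    threshold -> 'saccade', between the two thresholds -> 'pursuit',
    below the pursuit threshold -> 'fixation'."""
    labels = []
    for v in velocities_deg_s:
        if v >= saccade_thresh:
            labels.append("saccade")
        elif v >= pursuit_thresh:
            labels.append("pursuit")
        else:
            labels.append("fixation")
    return labels
```

Onsets and offsets of each event then fall out of the label sequence: an event starts where the label changes and ends where it changes back, usually after a minimum-duration filter removes spurious single-sample events.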

  14. A novel track imaging system as a range counter

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z. [National Institute of Radiological Sciences (Japan); Matsufuji, N. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kanayama, S. [Chiba University (Japan); Ishida, A. [National Institute of Radiological Sciences (Japan); Tokyo Institute of Technology (Japan); Kohno, T. [Tokyo Institute of Technology (Japan); Koba, Y.; Sekiguchi, M.; Kitagawa, A.; Murakami, T. [National Institute of Radiological Sciences (Japan)

    2016-05-01

    An image-intensified, camera-based track imaging system has been developed to measure the tracks of ions in a scintillator block. To study the performance of the detector unit in the system, two types of scintillators, a dosimetrically tissue-equivalent plastic scintillator EJ-240 and a CsI(Tl) scintillator, were separately irradiated with carbon-ion (¹²C) beams of therapeutic energy from HIMAC at NIRS. The images of individual ion tracks in the scintillators were acquired by the newly developed track imaging system. The ranges reconstructed from the images are reported here. The range resolution of the measurements is 1.8 mm for 290 MeV/u carbon ions, which is considered a significant improvement on the energy resolution of the conventional ΔE/E method. The detector is compact and easy to handle, and it can fit inside treatment rooms for in-situ studies, as well as satisfy clinical quality assurance purposes.

  15. Intelligent Flight Control System and Aeronautics Research at NASA Dryden

    Science.gov (United States)

    Brown, Nelson A.

    2009-01-01

    This video presentation reviews the F-15 Intelligent Flight Control System and contains clips of flight tests and aircraft performance in the areas of target tracking, takeoff and differential stabilators. Video of the APG milestone flight 1g formation is included.

  16. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264-encoded video for all content types, combining parameters in both the physical and application layers, over Universal Mobile Telecommunication Systems (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264-encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. This work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
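A 2-state Markov loss model of the kind used for the radio link (often called a Gilbert model) can be simulated in a few lines: packets are delivered in the Good state and dropped in the Bad state, and the transition probabilities control how bursty the losses are. The probabilities below are illustrative, not the paper's values.

```python
import random

def simulate_gilbert_loss(n_packets, p_good_to_bad, p_bad_to_good, seed=0):
    """Simulate a 2-state Markov (Gilbert) packet-loss pattern. Returns a
    list of booleans, True where the packet is lost. The stationary loss
    probability is p_good_to_bad / (p_good_to_bad + p_bad_to_good)."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    state_good = True
    losses = []
    for _ in range(n_packets):
        losses.append(not state_good)  # lost whenever we are in the Bad state
        if state_good:
            if rng.random() < p_good_to_bad:
                state_good = False
        else:
            if rng.random() < p_bad_to_good:
                state_good = True
    return losses
```

Feeding such loss traces into the decoder is what lets the quality models above be trained against controlled, repeatable channel conditions.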

  17. Developing an electronic system to manage and track emergency medications.

    Science.gov (United States)

    Hamm, Mark W; Calabrese, Samuel V; Knoer, Scott J; Duty, Ashley M

    2018-03-01

    The development of a Web-based program to track and manage emergency medications with radio frequency identification (RFID) is described. At the Cleveland Clinic, medication kit restocking records and dispense locations were historically documented using a paper record-keeping system. The Cleveland Clinic investigated options to replace the paper-based tracking logs with a Web-based program that could track the real-time location and inventory of emergency medication kits. Vendor collaboration with a board of pharmacy (BOP) compliance inspector and pharmacy personnel resulted in the creation of a dual barcoding system using medication and pocket labels. The Web-based program was integrated with a Cleveland Clinic-developed asset tracking system using active RFID tags to give the real-time location of the medication kit. The Web-based program and the asset tracking system allowed identification of kits nearing expiration or containing recalled medications. Conversion from a paper-based system to a Web-based program began in October 2013. After 119 days, data were evaluated to assess the success of the conversion. Pharmacists spent an average of 27 minutes per day approving medication kits during the postimplementation period versus 102 minutes daily using the paper-based system, representing a 74% decrease in pharmacist time spent on this task. Prospective reports are generated monthly to allow the manager to assess the expected workload and adjust staffing for the next month. Implementation of a BOP-approved Web-based system for managing and tracking emergency medications with RFID integration decreased pharmacist review time, minimized compliance risk, and increased access to real-time data. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
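The prospective expiration reports described above reduce to scanning each kit's contents against a review horizon. Below is a minimal sketch with a hypothetical data layout (kit id mapped to a list of drug/expiry pairs); the Cleveland Clinic system's actual schema is not described in this abstract.

```python
from datetime import date

def kits_needing_attention(kits, today, horizon_days=30):
    """Flag medication kits whose earliest-expiring item falls within the
    review horizon -- the kind of prospective monthly report described
    above. `kits` maps kit id -> list of (drug, expiry date) pairs.
    Returns (kit_id, earliest_expiry) pairs, soonest first."""
    flagged = []
    for kit_id, contents in kits.items():
        earliest = min(expiry for _drug, expiry in contents)
        if (earliest - today).days <= horizon_days:
            flagged.append((kit_id, earliest))
    return sorted(flagged, key=lambda item: item[1])
```

Joining such a report with the RFID location feed is what turns "kit K1 expires soon" into "kit K1 on ward 4B expires soon", which is where the workload savings come from.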

  18. Testbeam results of the first real-time embedded tracking system with artificial retina

    Energy Technology Data Exchange (ETDEWEB)

    Neri, N., E-mail: nicola.neri@mi.infn.it; Abba, A.; Caponio, F.; Citterio, M.; Coelli, S.; Fu, J.; Merli, A.; Monti, M.; Petruzzo, M.

    2017-02-11

    We present the testbeam results of the first real-time embedded tracking system based on artificial retina algorithm. The tracking system prototype is capable of fast track reconstruction with a latency of the response below 1 μs and track parameter resolutions that are comparable with the offline results. The artificial retina algorithm was implemented in hardware in a custom data acquisition board based on commercial FPGA. The system was tested successfully using a 180 GeV/c proton beam at the CERN SPS with a maximum track rate of about 280 kHz. Online track parameters were found in good agreement with offline results and with the simulated response. - Highlights: • First real-time tracking system based on artificial retina algorithm tested on beam. • Fast track reconstruction within one microsecond latency and offline like quality. • Fast tracking algorithm implemented in commercial FPGAs.
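The artificial retina algorithm maps detector hits into a grid of track-parameter cells, each accumulating a Gaussian-weighted response to the hits; the cell with the maximal response approximates the track. Here is a toy software sketch for straight 2D tracks y = m*x + q (the real system evaluates this in FPGA logic, with its own grid granularity and weighting; grid and sigma below are illustrative):

```python
import math

def retina_response(hits, m_values, q_values, sigma=0.5):
    """Scan a grid of track parameters (slope m, intercept q). Each cell
    accumulates Gaussian-weighted residuals of the hits from the line
    y = m*x + q; the maximal cell approximates the track.
    Returns (best_m, best_q, best_response)."""
    best = (None, None, -1.0)
    for m in m_values:
        for q in q_values:
            r = sum(math.exp(-((y - (m * x + q)) ** 2) / (2 * sigma ** 2))
                    for x, y in hits)
            if r > best[2]:
                best = (m, q, r)
    return best
```

Because every cell's response is an independent sum over hits, the whole grid can be evaluated in parallel, which is what makes the approach attractive for sub-microsecond FPGA latencies.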

  19. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    Science.gov (United States)

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
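Strain itself is a simple function of the tracked segment lengths. Two common 1-D measures are sketched below for a speckle-tracked wall segment; the paper's exact strain definition is not given in this abstract, so these are generic formulas, not the authors' choice.

```python
def engineering_strain(l0, l):
    """Engineering strain: relative elongation of a segment whose
    tracked length changes from l0 to l."""
    return (l - l0) / l0

def green_lagrange_strain(l0, l):
    """1-D Green-Lagrange strain of the same segment; preferred for the
    large deformations seen in soft tissue."""
    return (l ** 2 - l0 ** 2) / (2 * l0 ** 2)
```

Computing these per segment over the whole 3D mesh is what exposes the strong local strain differences the study reports within the aneurysm wall.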

  20. A simultaneous localization and tracking method for a worm tracking system

    Directory of Open Access Journals (Sweden)

    Kowalski Mateusz

    2014-09-01

    Full Text Available The idea of worm tracking refers to the path analysis of Caenorhabditis elegans nematodes and is an important tool in neurobiology which helps to describe their behavior. Knowledge about nematode behavior can be applied as a model to study the physiological addiction process or other nervous system processes in animals and humans. Tracking is performed by using a special manipulator positioning a microscope with a camera over a dish with an observed individual. In the paper, the accuracy of a nematode’s trajectory reconstruction is investigated. Special attention is paid to analyzing errors that occurred during the microscope displacements. Two sources of errors in the trajectory reconstruction are shown. One is due to the difficulty in accurately measuring the microscope shift, the other is due to a nematode displacement during the microscope movement. A new method that increases path reconstruction accuracy based only on the registered sequence of images is proposed. The method Simultaneously Localizes And Tracks (SLAT) the nematodes, and is robust to the positioning system displacement errors. The proposed method predicts the nematode position by using NonParametric Regression (NPR). In addition, two other methods of the SLAT problem are implemented to evaluate the NPR method. The first consists in ignoring the nematode displacement during microscope movement, and the second is based on a Kalman filter. The results suggest that the SLAT method based on nonparametric regression gives the most promising results and decreases the error of trajectory reconstruction by 25% compared with reconstruction based on data from the positioning system.
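Nadaraya-Watson kernel regression is one standard form of nonparametric regression and can serve as a sketch of the position prediction used during microscope movement. The 1-D setup and the bandwidth below are illustrative; the paper's NPR formulation is not reproduced in this record.

```python
import math

def npr_predict(times, positions, t_query, bandwidth=1.0):
    """Nadaraya-Watson nonparametric regression: predict the nematode
    position at t_query as a Gaussian-kernel weighted average of the
    positions observed at `times`."""
    weights = [math.exp(-((t - t_query) ** 2) / (2 * bandwidth ** 2))
               for t in times]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, positions)) / total
```

Predicting where the worm will be while the stage is still moving is exactly the gap this fills: the positioning-system readout is unreliable during the move, so the smoothed image-based history stands in for it.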