WorldWideScience

Sample records for video sensor system

  1. Video sensor architecture for surveillance applications.

    Science.gov (United States)

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
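The abstract does not give the XML schema the nodes use, but the reporting step can be sketched. A minimal sketch, assuming illustrative element and attribute names (`sensor_report`, `object`, `bbox`, `confidence`) that are not the paper's actual format:

```python
import xml.etree.ElementTree as ET

def build_report(node_id, objects):
    """Serialize a sensor node's detection results as an XML description.

    `objects` is a list of dicts with a class label, bounding box and
    classifier confidence; all field names are illustrative assumptions.
    """
    root = ET.Element("sensor_report", {"node": node_id})
    for obj in objects:
        e = ET.SubElement(root, "object", {"class": obj["label"]})
        x, y, w, h = obj["bbox"]
        ET.SubElement(e, "bbox",
                      {"x": str(x), "y": str(y), "w": str(w), "h": str(h)})
        ET.SubElement(e, "confidence").text = f'{obj["confidence"]:.2f}'
    return ET.tostring(root, encoding="unicode")

# One detected and classified object, reported to higher-level nodes.
report = build_report("node-03", [
    {"label": "person", "bbox": (120, 40, 35, 90), "confidence": 0.87},
])
```

A consumer on the receiving side can parse the string back with `ET.fromstring` and route it to the user or higher-level software.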

  2. Video Sensor Architecture for Surveillance Applications

    Directory of Open Access Journals (Sweden)

    José E. Simó

    2012-02-01

    Full Text Available This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  3. Smartphone Video Guidance Sensor for Small Satellites

    Data.gov (United States)

    National Aeronautics and Space Administration — Smartphone Video Guidance Sensor (SVGS) for Small Satellites will provide a low-cost, integrated rendezvous & proximity operations sensor system to allow an...

  4. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers wireless video sensor node constraints such as limited processing and energy resources while preserving video quality on the receiver side. The compression, transport, and routing protocols are proposed in the application, transport, and network layers, respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  5. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers wireless video sensor node constraints such as limited processing and energy resources while preserving video quality on the receiver side. The compression, transport, and routing protocols are proposed in the application, transport, and network layers, respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.
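The abstract mentions a network-layer dropping scheme but gives no detail; a plausible sketch, assuming priority-based dropping over I/P/B frame types (an assumption on our part, not the paper's stated policy), could discard the most expendable frames first when the outgoing queue exceeds capacity:

```python
def drop_frames(queue, capacity):
    """Illustrative dropping scheme: when the outgoing queue exceeds
    `capacity`, drop the least important frames first (B before P
    before I), helping preserve decoded video quality at the receiver.
    `queue` is a list of frame-type labels in transmission order.
    """
    priority = {"I": 0, "P": 1, "B": 2}  # lower value = more important
    if len(queue) <= capacity:
        return list(queue)
    # Indices of the most expendable frames, to be dropped first.
    to_drop = sorted(range(len(queue)),
                     key=lambda i: priority[queue[i]],
                     reverse=True)[: len(queue) - capacity]
    drop_set = set(to_drop)
    return [f for i, f in enumerate(queue) if i not in drop_set]
```

I-frames survive longest because every dependent P/B frame in the group of pictures needs them to decode.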

  6. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed. PMID:22438753

  7. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alvaro Suarez

    2012-02-01

    Full Text Available Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed.

  8. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed.
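The tagging-and-search idea above can be sketched in a few lines; a toy version, assuming illustrative field names (`temperature`, `frame`, `timestamp`) rather than the paper's semantic tag format:

```python
import time

def tag_frame(frame_index, sensors):
    """Attach sensor readings to a video frame as a metadata record.
    Field names are illustrative, not the paper's actual schema.
    """
    return {"frame": frame_index,
            "timestamp": time.time(),
            "sensors": dict(sensors)}

def search(tags, key, predicate):
    """Toy search over tagged frames, e.g. frames where temperature > 25."""
    return [t["frame"] for t in tags
            if key in t["sensors"] and predicate(t["sensors"][key])]

# Ten frames tagged with a rising temperature reading.
tags = [tag_frame(i, {"temperature": 20 + i}) for i in range(10)]
warm = search(tags, "temperature", lambda v: v > 25)
```

The paper's actual system stores the tags semantically on the server so that such queries run efficiently over many uploaded videos.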

  9. Focal-plane change triggered video compression for low-power vision sensor systems.

    Directory of Open Access Journals (Sweden)

    Yu M Chi

    Full Text Available Video sensors with embedded compression offer significant energy savings in transmission but incur energy losses in the complexity of the encoder. Energy-efficient video compression architectures for CMOS image sensors with focal-plane change detection are presented and analyzed. The compression architectures use pixel-level computational circuits to minimize energy usage by selectively processing only pixels which generate significant temporal intensity changes. Using the temporal intensity change detection to gate the operation of a differential DCT based encoder achieves nearly identical image quality to traditional systems (a 4 dB decrease in PSNR) while reducing the amount of data that is processed by 67% and reducing overall power consumption by 51%. These typical energy savings, resulting from the sparsity of motion activity in the visual scene, demonstrate the utility of focal-plane change triggered compression for surveillance vision systems.
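The gating idea can be sketched in software (the paper implements it in pixel-level circuits): only blocks whose temporal change exceeds a threshold reach the encoder. A minimal sketch, with the DCT stage omitted and the threshold value an assumption:

```python
def encode_changed_blocks(prev, curr, block=8, threshold=10):
    """Sketch of change-triggered compression: only blocks whose summed
    absolute temporal difference exceeds `threshold` would be passed to
    the (omitted) DCT encoder; static blocks are skipped, saving energy.
    Frames are 2D lists of pixel intensities; parameters are illustrative.
    """
    h, w = len(curr), len(curr[0])
    active = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = sum(
                abs(curr[y][x] - prev[y][x])
                for y in range(by, min(by + block, h))
                for x in range(bx, min(bx + block, w))
            )
            if diff > threshold:
                active.append((by, bx))  # this block would be DCT-encoded
    return active
```

In a sparse-motion surveillance scene most blocks stay static, which is exactly where the reported 67% data reduction comes from.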

  10. DAVID: A new video motion sensor for outdoor perimeter applications

    International Nuclear Information System (INIS)

    Alexander, J.C.

    1986-01-01

    To be effective, a perimeter intrusion detection system must comprise both sensor and rapid assessment components. The use of closed circuit television (CCTV) to provide the rapid assessment capability makes possible the use of video motion detection (VMD) processing as a system sensor component. Despite its conceptual appeal, video motion detection has not been widely used in outdoor perimeter systems because of an inability to discriminate between genuine intrusions and numerous environmental effects such as cloud shadows, wind motion, reflections, precipitation, etc. The result has been an unacceptably high false alarm rate and operator workload. DAVID (Digital Automatic Video Intrusion Detector) utilizes new digital signal processing techniques to achieve a dramatic improvement in discrimination performance, thereby making video motion detection practical for outdoor applications. This paper begins with a discussion of the key considerations in implementing an outdoor video intrusion detection system, followed by a description of the DAVID design in light of these considerations.
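One common way to reject the transient environmental effects mentioned above is to require that significant change persist across several frames before alarming. This is a toy discriminator under that assumption, not DAVID's actual (undisclosed) algorithm; all thresholds are illustrative:

```python
def persistent_motion(frames, pixel_thresh=25, area_thresh=4, persist=3):
    """Toy VMD discriminator: raise an alarm only when significant change
    persists for `persist` consecutive frame transitions, rejecting
    one-frame transients such as reflections or precipitation flashes.
    `frames` is a sequence of 2D intensity grids.
    """
    run = 0
    for prev, curr in zip(frames, frames[1:]):
        changed = sum(
            1 for rp, rc in zip(prev, curr) for p, c in zip(rp, rc)
            if abs(c - p) > pixel_thresh
        )
        run = run + 1 if changed >= area_thresh else 0
        if run >= persist:
            return True  # sustained motion: likely a genuine intrusion
    return False
```

A single-frame disturbance resets the persistence counter, while a moving intruder keeps changing pixels frame after frame and trips the alarm.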

  11. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2014-07-01

    Full Text Available Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.

  12. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    Science.gov (United States)

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
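The recognition step in an HMM-per-activity design scores a feature sequence against each trained model and picks the winner. A minimal sketch using the scaled forward algorithm; the single-state binary-symbol models below are toy stand-ins, not models trained on real silhouette features:

```python
import math

def forward_loglike(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    with initial probabilities pi, transition matrix A and emission
    matrix B, computed with the scaled forward algorithm."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    c = sum(alpha)
    ll = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll

def classify(obs, models):
    """Pick the activity whose trained HMM scores the sequence highest."""
    return max(models, key=lambda name: forward_loglike(obs, *models[name]))

# Toy single-state models over a binary feature symbol, one per activity.
models = {
    "walking": ([1.0], [[1.0]], [[0.9, 0.1]]),
    "sitting": ([1.0], [[1.0]], [[0.1, 0.9]]),
}
```

In the paper's system the observations would be symbols quantized from depth-silhouette/skeleton features, with one trained HMM per daily activity.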

  13. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.

  14. Data Compression by Shape Compensation for Mobile Video Sensors

    Directory of Open Access Journals (Sweden)

    Ben-Shung Chow

    2009-04-01

    Full Text Available Most security systems, whose transmission bandwidth and computing power are both sufficient, emphasize automatic recognition techniques. However, in some situations, such as baby monitors and intruder avoidance by mobile sensors, the decision function can sometimes be shifted to the concerned human to reduce the transmission and computation cost. We therefore propose a binary video compression method in low resolution to achieve low-cost mobile video communication for inexpensive camera sensors. Shape compensation, as proposed in this communication, successfully replaces the standard Discrete Cosine Transformation (DCT) after motion compensation.

  15. The Next Generation Advanced Video Guidance Sensor: Flight Heritage and Current Development

    Science.gov (United States)

    Howard, Richard T.; Bryan, Thomas C.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is the latest in a line of sensors that have flown four times in the last 10 years. The NGAVGS has been under development for the last two years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. This paper presents the flight heritage and results of the sensor technology, some hardware trades for the current sensor, and discusses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the various NGAVGS development units will be discussed along with the use of the NGAVGS as a proximity operations and docking sensor.

  16. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation-type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
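The triangulation geometry described above reduces to one line: with the laser offset a baseline d from the camera and aimed parallel to the optical axis, the spot angle a satisfies tan(a) = d / R, so the range is R = d / tan(a). A minimal sketch (function name is ours):

```python
import math

def range_from_spot(d, alpha):
    """Triangulation range from the laser-spot angle.

    d     : lateral baseline between laser and camera axis (same units as R)
    alpha : angle (radians) between the optical axis and the line of sight
            to the centroid of the laser spot, where tan(alpha) = d / R.
    """
    return d / math.tan(alpha)

# A spot whose line of sight makes atan(0.5/10) with the axis, seen with a
# 0.5 m baseline, lies about 10 m away.
R = range_from_spot(0.5, math.atan2(0.5, 10.0))
```

Note the practical limit of the scheme: as the target recedes, alpha shrinks toward zero and small errors in locating the spot centroid translate into large range errors.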

  17. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with high frame rate in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  18. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  19. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources for mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes, etc.). Although MMS can provide an accurate mapping solution for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, including the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system will automatically extract the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented, demonstrating the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to that obtained from separately captured images.
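The automatic extraction of highly overlapping frames can be sketched as a selection rule over per-frame motion. A minimal sketch, assuming a precomputed horizontal displacement estimate per frame and an overlap threshold of our choosing (the paper does not specify its criterion):

```python
def select_keyframes(displacements, frame_width, min_overlap=0.6):
    """Pick frames from a video so consecutive keyframes still overlap
    by at least `min_overlap` of the frame width.

    displacements : estimated horizontal shift (pixels) of each frame
                    relative to the previous one (illustrative input).
    """
    keyframes = [0]
    travelled = 0.0
    max_shift = (1.0 - min_overlap) * frame_width
    for i, d in enumerate(displacements, start=1):
        travelled += d
        if travelled >= max_shift:
            keyframes.append(i)  # overlap about to drop below threshold
            travelled = 0.0
    return keyframes
```

Slower camera motion yields fewer keyframes, so the mapping pipeline processes only as many images as the overlap requirement demands.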

  20. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system onboard the Space Shuttle comprises a camera, a video signal switching and routing unit (VSU), and a video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of the High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of the state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.

  1. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  2. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
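Mapping a camera's pan/tilt pose into a shared spherical viewspace is the first step of the geo-registration described above. A minimal sketch under an assumed convention (pan about the vertical axis, tilt up from the horizon); a real PTZ control model would also calibrate zoom and mechanical offsets:

```python
import math

def ptz_to_ray(pan_deg, tilt_deg):
    """Unit viewing direction for given pan/tilt angles, in a
    right-handed frame with y up and z along pan = tilt = 0.
    The convention is illustrative, not the paper's calibrated model.
    """
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(t) * math.sin(p),   # x: right
            math.sin(t),                 # y: up
            math.cos(t) * math.cos(p))   # z: forward
```

Intersecting such rays with the aerial orthophotograph's ground plane is what yields the unified geo-referenced map used to coordinate the cameras.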

  3. Identifying balance impairments in people with Parkinson's disease using video and wearable sensors.

    Science.gov (United States)

    Stack, Emma; Agarwal, Veena; King, Rachel; Burnett, Malcolm; Tahavori, Fatemeh; Janko, Balazs; Harwin, William; Ashburn, Ann; Kunkel, Dorit

    2018-05-01

    Falls and near falls are common among people with Parkinson's (PwP). To date, most wearable sensor research has focused on fall detection, and few studies have explored whether wearable sensors can detect instability. Can instability (caution or near-falls) be detected using wearable sensors in comparison to video analysis? Twenty-four people (aged 60-86) with and without Parkinson's were recruited from community groups. Movements (e.g. walking, turning, transfers and reaching) were observed in the gait laboratory and/or at home; recorded using clinical measures, video and five wearable sensors (attached on the waist, ankles and wrists). After defining 'caution' and 'instability', two researchers evaluated video data and a third the raw wearable sensor data; blinded to each other's evaluations. Agreement between video and sensor data was calculated on stability, timing, step count and strategy. Data were available for 117 performances: 82 (70%) appeared stable on video. Ratings agreed in 86/117 cases (74%). Highest agreement was noted for chair transfer, timed up and go test and 3 m walks. Video analysts noted caution (slow, contained movements, safety-enhancing postures and concentration) and/or instability (saving reactions, stopping after stumbling or veering) in 40/134 performances (30%): raw wearable sensor data identified 16/35 performances rated cautious or unstable (sensitivity 46%) and 70/82 rated stable (specificity 85%). There was a 54% chance that a performance identified from wearable sensors as cautious/unstable was so; rising to 80% for stable movements. Agreement between wearable sensor and video data suggested that wearable sensors can detect subtle instability and near-falls. Caution and instability were observed in nearly a third of performances, suggesting that simple, mildly challenging actions, with clearly defined start- and end-points, may be most amenable to monitoring during free-living at home. Using the genuine near-falls recorded, work continues to
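The agreement figures quoted above follow directly from the reported counts (16 of 35 cautious/unstable performances flagged, 70 of 82 stable performances correctly passed), and can be checked:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).

    Counts below come from the abstract: sensors flagged 16 of 35
    video-rated cautious/unstable performances and 70 of 82 stable ones.
    """
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=16, fn=35 - 16, tn=70, fp=82 - 70)
```

Note the abstract's 54% positive predictive value is stated directly rather than derivable from these two counts alone, since the study's denominators (117 vs. 134 performances) differ across analyses.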

  4. Evaluation of intrusion sensors and video assessment in areas of restricted passage

    International Nuclear Information System (INIS)

    Hoover, C.E.; Ringler, C.E.

    1996-04-01

    This report discusses an evaluation of intrusion sensors and video assessment in areas of restricted passage. The discussion focuses on applications of sensors and video assessment in suspended ceilings and air ducts. It also includes current and proposed requirements for intrusion detection and assessment. Detection and nuisance alarm characteristics of selected sensors, as well as the assessment capabilities of low-cost board cameras, were included in the evaluation.

  5. Video Analysis Verification of Head Impact Events Measured by Wearable Sensors.

    Science.gov (United States)

    Cortes, Nelson; Lincoln, Andrew E; Myer, Gregory D; Hepburn, Lisa; Higgins, Michael; Putukian, Margot; Caswell, Shane V

    2017-08-01

    Wearable sensors are increasingly used to quantify the frequency and magnitude of head impact events in multiple sports. There is a paucity of evidence that verifies head impact events recorded by wearable sensors. To utilize video analysis to verify head impact events recorded by wearable sensors and describe the respective frequency and magnitude. Cohort study (diagnosis); Level of evidence, 2. Thirty male (mean age, 16.6 ± 1.2 years; mean height, 1.77 ± 0.06 m; mean weight, 73.4 ± 12.2 kg) and 35 female (mean age, 16.2 ± 1.3 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) players volunteered to participate in this study during the 2014 and 2015 lacrosse seasons. Participants were instrumented with GForceTracker (GFT; boys) and X-Patch sensors (girls). Simultaneous game video was recorded by a trained videographer using a single camera located at the highest midfield location. One-third of the field was framed and panned to follow the ball during games. Videographic and accelerometer data were time synchronized. Head impact counts were compared with video recordings and were deemed valid if (1) the linear acceleration was ≥20 g, (2) the player was identified on the field, (3) the player was in camera view, and (4) the head impact mechanism could be clearly identified. Descriptive statistics of peak linear acceleration (PLA) and peak rotational velocity (PRV) for all verified head impacts ≥20 g were calculated. For the boys, a total of 1063 impacts (2014: n = 545; 2015: n = 518) were logged by the GFT between game start and end times (mean PLA, 46 ± 31 g; mean PRV, 1093 ± 661 deg/s) during 368 player-games. Of these impacts, 690 were verified via video analysis (65%; mean PLA, 48 ± 34 g; mean PRV, 1242 ± 617 deg/s). The X-Patch sensors, worn by the girls, recorded a total of 180 impacts during the course of the games, and 58 (2014: n = 33; 2015: n = 25) were verified via video analysis (32%; mean PLA, 39 ± 21 g; mean PRV, 1664
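
The four verification criteria translate directly into a filter over sensor-recorded impacts, after which the descriptive statistics are computed; a sketch with illustrative records (the field names and sample values below are assumptions for illustration, not the study's data):

```python
import statistics

def is_verified(impact):
    """Apply the study's four video-verification criteria to one impact record."""
    return (impact["pla_g"] >= 20.0          # (1) linear acceleration >= 20 g
            and impact["player_identified"]  # (2) player identified on the field
            and impact["in_camera_view"]     # (3) player in camera view
            and impact["mechanism_clear"])   # (4) impact mechanism clear on video

def summarize(impacts):
    """Count verified impacts and return mean / population SD of their PLA."""
    verified = [i["pla_g"] for i in impacts if is_verified(i)]
    return len(verified), statistics.mean(verified), statistics.pstdev(verified)
```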

  6. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches.
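
The best-known Class Energy Image, the Gait Energy Image (GEI), is simply the pixel-wise mean of size-normalized, centered binary silhouettes over a gait cycle; a minimal sketch:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of size-normalized, centered binary silhouettes
    (T x H x W) into one grayscale template. Bright pixels mark body regions
    present in most frames; gray pixels mark the moving limbs."""
    s = np.asarray(silhouettes, dtype=float)
    return s.mean(axis=0)
```

In practice the silhouettes are extracted by background subtraction and normalized to a fixed height before averaging; other Class Energy Image variants differ mainly in how frames are weighted or differenced.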

  7. A sensor and video based ontology for activity recognition in smart environments.

    Science.gov (United States)

    Mitchell, D; Morrow, Philip J; Nugent, Chris D

    2014-01-01

    Activity recognition is used in a wide range of applications including healthcare and security. In a smart environment activity recognition can be used to monitor and support the activities of a user. There have been a range of methods used in activity recognition including sensor-based approaches, vision-based approaches and ontological approaches. This paper presents a novel approach to activity recognition in a smart home environment which combines sensor and video data through an ontological framework. The ontology describes the relationships and interactions between activities, the user, objects, sensors and video data.

  8. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  9. Disposable Multi-Sensor Unattended Ground Sensor Systems for Detecting Personnel (Systemes de detection multi-capteurs terrestres autonome destines a detecter du personnel)

    Science.gov (United States)

    2015-02-01

    the set of DCT coefficients for all the training data corresponding to the people. The matrix [X_p] can then be written as: [X_p] = [X_p^+] - [X_p^-] ...deployed on two types of ground conditions. This included ARL multi-modal sensors, video and acoustic sensors from the Universities of Memphis and...Mississippi, SASNet from Canada, video from the Night Vision Laboratory and the Pearls of Wisdom system from Israel, operated in conjunction with ARL personnel. This

  10. Enhanced technologies for unattended ground sensor systems

    Science.gov (United States)

    Hartup, David C.

    2010-04-01

    Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.

  11. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    Science.gov (United States)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring of and technical involvement in video standards groups provide the knowledge base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal

  12. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

    Full Text Available In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor which detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge this is the first study using a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been carried out by direct observation, which is time-demanding, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, i.e. ca. 0.35 min of review per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were only classified to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).

  13. Multimodal surveillance sensors, algorithms, and systems

    CERN Document Server

    Zhu, Zhigang

    2007-01-01

    From front-end sensors to systems and environmental issues, this practical resource guides you through the many facets of multimodal surveillance. The book examines thermal, vibration, video, and audio sensors in a broad context of civilian and military applications. This cutting-edge volume provides an in-depth treatment of data fusion algorithms that takes you to the core of multimodal surveillance, biometrics, and sentient computing. The book discusses such people and activity topics as tracking people and vehicles and identifying individuals by their speech.Systems designers benefit from d

  14. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, avoiding off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed; they offer different trade-offs between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme can reduce memory traffic by 50% for VC-ME.
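
ME in block-based video compression searches a window of the reference frame for the block that minimizes a cost such as the sum of absolute differences (SAD); the data-reuse schemes above decide which reference pixels stay on-chip while this search runs. A minimal full-search sketch:

```python
import numpy as np

def full_search(cur_block, ref, top, left, radius=4):
    """Return the motion vector (dy, dx) and SAD of the best match for
    cur_block, searched within +/- radius around (top, left) in ref."""
    n = cur_block.shape[0]
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(cur_block - ref[y:y + n, x:x + n]).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Every candidate position re-reads overlapping reference pixels, which is exactly why on-chip buffering of the search window (intra-frame reuse) and of reconstructed frames (inter-frame reuse) cuts off-chip traffic.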

  15. Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system

    Directory of Open Access Journals (Sweden)

    Michael B. McCamy

    2013-02-01

    Full Text Available Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so-called “fixational eye movements”, which include microsaccades, drift, and ocular microtremor (OMT). Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, >0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT’s small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.

  16. Compression of Video Tracking and Bandwidth Balancing Routing in Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2015-12-01

    Full Text Available There has been tremendous growth in multimedia applications over wireless networks. Wireless Multimedia Sensor Networks (WMSNs) have become the premier choice in many research communities and in industry. Many state-of-the-art applications, such as surveillance, traffic monitoring, and remote health care, are essentially video tracking and transmission in WMSNs. Transmission speed is constrained by the large size of video data and by fixed bandwidth allocation along constant routing paths. In this paper, we present a CamShift-based algorithm to compress tracked video. We then propose a bandwidth-balancing strategy in which each sensor node dynamically selects the next-hop node with the highest potential bandwidth capacity to resume communication. Key to this strategy is that each node maintains just two parameters that capture its historical bandwidth trend and predict its near-future bandwidth capacity; the forwarding node then selects the next hop with the highest predicted capacity. Simulations demonstrate that our approach significantly increases the data received by the sink node and decreases the delay of video transmission in Wireless Multimedia Sensor Network environments.
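
The abstract states only that each node keeps two parameters encoding its bandwidth history and trend; a Holt-style (level + trend) exponential-smoothing sketch of that idea follows. The smoothing constants and the neighbor-table shape are assumptions for illustration, not values from the paper:

```python
def update(level, trend, sample, alpha=0.5, beta=0.3):
    """Fold a new bandwidth measurement into the node's two stored parameters."""
    new_level = alpha * sample + (1 - alpha) * (level + trend)
    new_trend = beta * (new_level - level) + (1 - beta) * trend
    return new_level, new_trend

def predicted_bandwidth(level, trend, steps=1):
    """Extrapolate near-future capacity from the two parameters."""
    return level + steps * trend

def pick_next_hop(neighbors):
    """neighbors: {node_id: (level, trend)} -> id with highest predicted capacity."""
    return max(neighbors, key=lambda n: predicted_bandwidth(*neighbors[n]))
```

A neighbor whose bandwidth is currently lower but rising can thus be preferred over one that is higher but falling, which is the point of keeping the trend term.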

  17. Automatic Traffic Data Collection under Varying Lighting and Temperature Conditions in Multimodal Environments: Thermal versus Visible Spectrum Video-Based Systems

    Directory of Open Access Journals (Sweden)

    Ting Fu

    2017-01-01

    Full Text Available Vision-based monitoring systems using visible spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and to evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data were collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better under low-visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in global accuracy of 48%. Thermal speed measurements were consistently more accurate than those of the regular video at daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible-light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, storage space, and lower processing requirements.

  18. Evaluation of video transmission of MAC protocols in wireless sensor network

    Science.gov (United States)

    Maulidin, Mahmuddin, M.; Kamaruddin, L. M.; Elsaikh, Mohamed

    2016-08-01

    A Wireless Sensor Network (WSN) is a wireless network consisting of sensor nodes scattered over a particular area to monitor physical or environmental conditions. Because the nodes are scattered across the sensor field, an appropriate MAC protocol scheme is needed to establish communication links for data transfer. Video transmission is an important future application, and it must be achievable at low cost and low power consumption. In this paper, five different WSN MAC protocols are compared for video transmission: the IEEE 802.11 standard, the IEEE 802.15.4 standard, CSMA/CA, Berkeley-MAC, and Lightweight-MAC. Simulation experiments were conducted in OMNeT++ with the INET network simulator to evaluate their performance. The results indicate that IEEE 802.11 performs better than the other protocols in terms of packet delivery, throughput, and latency.
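
The three performance measures compared here (packet delivery ratio, throughput, latency) can be computed from any simulator's send/receive trace; a generic sketch (the trace tuple format is an assumption for illustration, not OMNeT++/INET's actual output format):

```python
def evaluate(sent, received_records):
    """received_records: list of (payload_bytes, send_time_s, recv_time_s)
    for each delivered packet. Returns (PDR, throughput in bit/s, mean latency in s)."""
    delivered = len(received_records)
    pdr = delivered / sent
    total_bytes = sum(b for b, _, _ in received_records)
    duration = (max(r for _, _, r in received_records)
                - min(s for _, s, _ in received_records))
    throughput_bps = 8 * total_bytes / duration
    mean_latency = sum(r - s for _, s, r in received_records) / delivered
    return pdr, throughput_bps, mean_latency
```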

  19. Monitoring of Structures and Mechanical Systems Using Virtual Visual Sensors for Video Analysis: Fundamental Concept and Proof of Feasibility

    Directory of Open Access Journals (Sweden)

    Thomas Schumacher

    2013-12-01

    Full Text Available Structural health monitoring (SHM) has become a viable tool to provide owners of structures and mechanical systems with quantitative and objective data for maintenance and repair. Traditionally, discrete contact sensors such as strain gages or accelerometers have been used for SHM. However, distributed remote sensors could be advantageous since they don’t require cabling and can cover an area rather than a limited number of discrete points. Along this line we propose a novel monitoring methodology based on video analysis. By employing commercially available digital cameras combined with efficient signal processing methods we can measure and compute the fundamental frequency of vibration of structural systems. The basic concept is that small changes in the intensity value of a monitored pixel with fixed coordinates, caused by the vibration of structures, can be captured by employing techniques such as the Fast Fourier Transform (FFT). In this paper we introduce the basic concept and mathematical theory of this proposed so-called virtual visual sensor (VVS), we present a set of initial laboratory experiments to demonstrate the accuracy of this approach, and we provide a practical monitoring example of an in-service bridge. Finally, we discuss further work to improve the current methodology.
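
The VVS concept reduces to taking the FFT of one pixel's intensity time series and reading off the dominant peak; a synthetic sketch (the frame rate and vibration frequency below are made-up illustration values):

```python
import numpy as np

fs, f0 = 240.0, 12.5                 # camera frame rate (Hz), structural vibration (Hz)
t = np.arange(0, 4, 1 / fs)          # 4 s of "video" at one fixed pixel
rng = np.random.default_rng(0)
# Vibration modulates the pixel's gray value around its mean, plus sensor noise.
intensity = 128 + 10 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
fundamental = freqs[spectrum.argmax()]   # peak bin -> estimated vibration frequency
```

Note the frame rate caps what is observable: a 30 Hz consumer camera can only resolve vibrations below its 15 Hz Nyquist frequency.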

  20. Sensor network based vehicle classification and license plate identification system

    Energy Technology Data Exchange (ETDEWEB)

    Frigo, Janette Rose [Los Alamos National Laboratory; Brennan, Sean M [Los Alamos National Laboratory; Rosten, Edward J [Los Alamos National Laboratory; Raby, Eric Y [Los Alamos National Laboratory; Kulathumani, Vinod K [WEST VIRGINIA UNIV.

    2009-01-01

    Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data and compute intensive sensors such video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternate more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units and sensor data acquisition systems.

  1. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    Science.gov (United States)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.

  2. A review of video security training and assessment-systems and their applications

    International Nuclear Information System (INIS)

    Cellucci, J.; Hall, R.J.

    1991-01-01

    This paper reports that during the last 10 years computer-aided video data collection and playback systems have been used as nuclear facility security training and assessment tools with varying degrees of success. These mobile systems have been used by trained security personnel for response force training, vulnerability assessment, force-on-force exercises and crisis management. Typically, synchronous recordings from multiple video cameras, communications audio, and digital sensor inputs are played back to the exercise participants and then edited for training and briefing. Factors that influence user acceptance include: frequency of use, the demands placed on security personnel, fear of punishment, user training requirements and equipment cost. The introduction of S-VHS video and new software for scenario planning, video editing and data reduction should bring about a wider range of security applications and supply the opportunity for significant cost sharing with other user groups.

  3. Detection of Upscale-Crop and Partial Manipulation in Surveillance Video Based on Sensor Pattern Noise

    Science.gov (United States)

    Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-01-01

    In many court cases, surveillance videos are used as significant court evidence. As these surveillance videos can easily be forged, it may cause serious social issues, such as convicting an innocent person. Nevertheless, there is little research being done on forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN). We exploit the scaling invariance of the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter to reliably unveil traces of upscaling in videos. By excluding the high-frequency components of the investigated video and adaptively choosing the size of the local search window, the proposed method effectively localizes partially manipulated regions. Empirical evidence from a large database of test videos, including RGB (Red, Green, Blue)/infrared video, dynamic-/static-scene video and compressed video, indicates the superior performance of the proposed method. PMID:24051524
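
At its core, SPN forensics extracts a noise residual from each frame, averages the residuals into a camera fingerprint, and correlates that fingerprint against a questioned video; a much-simplified sketch (a mean filter stands in for the wavelet denoiser used in practice, and the paper's MACE-MRH scale-invariant correlation filter is omitted):

```python
import numpy as np

def noise_residual(frame, k=3):
    """Residual = frame minus a k x k mean-filtered ("denoised") copy.
    Real SPN pipelines use a wavelet denoiser; the mean filter is a stand-in."""
    f = np.asarray(frame, dtype=float)
    pad = k // 2
    p = np.pad(f, pad, mode="edge")
    h, w = f.shape
    denoised = sum(p[dy:dy + h, dx:dx + w]
                   for dy in range(k) for dx in range(k)) / (k * k)
    return f - denoised

def fingerprint(frames):
    """Camera fingerprint: average of per-frame noise residuals."""
    return np.mean([noise_residual(f) for f in frames], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A frame from the same camera correlates strongly with the fingerprint; an upscaled or spliced region loses (or rescales) the pattern, which is what the detector localizes.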

  4. 4K x 2K pixel color video pickup system

    Science.gov (United States)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

    This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and sufficient output data rate for super-high-definition images. The present study is an attempt to fill the gap in this respect. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to new color-separation optics so that their pixel sample patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.

  5. Real time three-dimensional space video rate sensors for millimeter waves imaging based on very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested to develop inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cent) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based focal plane arrays (FPAs). The three cameras differ in number of detectors, scanning operation, and detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively, both for direct detection and limited to fixed imaging. The most recently designed sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency-modulated continuous-wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a user-friendly interface. This FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported

  6. A video Hartmann wavefront diagnostic that incorporates a monolithic microlens array

    International Nuclear Information System (INIS)

    Toeppen, J.S.; Bliss, E.S.; Long, T.W.; Salmon, J.T.

    1991-07-01

    We have developed a video Hartmann wavefront sensor that incorporates a monolithic array of photofabricated microlenses as the focusing elements. Combined with a video processor, this system reveals local gradients of the wavefront at a video frame rate of 30 Hz. Higher bandwidth is easily attainable with a camera and video processor that have faster frame rates. When used with a temporal filter, the reconstructed wavefront error is less than 1/10th wave.
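
A Hartmann-type sensor recovers each local wavefront gradient from how far a lenslet's focal spot moves off its reference position: slope = (spot displacement) / (lenslet focal length). A geometric sketch (the focal length and pixel pitch below are illustrative values, not the instrument's specifications):

```python
import numpy as np

def spot_centroid(subimage):
    """Intensity-weighted centroid (row, col) of one lenslet's focal spot."""
    ys, xs = np.indices(subimage.shape)
    total = subimage.sum()
    return (ys * subimage).sum() / total, (xs * subimage).sum() / total

def local_gradient(ref_c, meas_c, focal_len_m, pixel_pitch_m):
    """Local wavefront slopes (dW/dy, dW/dx, in radians) from spot displacement."""
    dy = (meas_c[0] - ref_c[0]) * pixel_pitch_m   # displacement in meters
    dx = (meas_c[1] - ref_c[1]) * pixel_pitch_m
    return dy / focal_len_m, dx / focal_len_m
```

The full wavefront is then reconstructed by integrating the grid of local slopes, one per lenslet.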

  7. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

    By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. Some of the benefits include a reduced need for on-site security and operating personnel and its patented analytics product known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView, which has its own source code, and re-programmed code. 1 fig.

  8. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Science.gov (United States)

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  9. Proximity Operations and Docking Sensor Development

    Science.gov (United States)

    Howard, Richard T.; Bryan, Thomas C.; Brewster, Linda L.; Lee, James E.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been under development for the last three years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in spot mode out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next generation sensor was updated to allow it to support the CEV and COTS programs. The flight proven AR&D sensor has been redesigned to update parts and add additional capabilities for CEV and COTS with the development of the Next Generation AVGS at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation tolerant parts. In addition, new capabilities include greater sensor range, auto ranging capability, and real-time video output. This paper presents some sensor hardware trades, use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements.

  10. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors.

    Science.gov (United States)

    Belkacem, Abdelkader Nasreddine; Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

    EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, as an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with opened and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.
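The wavelet-based detection step can be sketched roughly as follows. This is an illustration only: the paper's actual wavelet family, decomposition levels and thresholds are not reproduced here; level-1 Haar detail coefficients simply flag the abrupt amplitude change that an eye movement produces in a temporal-channel trace.

```python
import math

def haar_detail(x):
    """Level-1 Haar wavelet detail coefficients of an even-length signal."""
    return [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]

def detect_events(x, threshold):
    """Coefficient indices where the detail magnitude exceeds the
    threshold, i.e. where the signal changes abruptly."""
    return [i for i, d in enumerate(haar_detail(x)) if abs(d) > threshold]

# synthetic temporal-channel trace: flat baseline, then a sharp step
# (a crude stand-in for the potential shift an eye movement produces)
signal = [0.0] * 9 + [5.0] * 7
events = detect_events(signal, threshold=1.0)  # the step is flagged
```

The detail coefficients are near zero on smooth segments and spike at the step, which is why a simple threshold suffices to mark the instant of movement.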

  11. Unattended video surveillance systems for international safeguards

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1979-01-01

    The use of unattended video surveillance systems places some unique requirements on the systems and their hardware. The systems have the traditional requirements of video imaging, video storage, and video playback but also have some special requirements such as tamper safing. The technology available to meet these requirements and how it is being applied to unattended video surveillance systems are discussed in this paper.

  12. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems for both still images and videos. It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have been typically recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still images and videos: the dead leaves texture chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measure of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes the apparent global motion well: not only translations, but also rotations around the optical axis and distortion due to the electronic rolling shutter that equips most CMOS sensors. The protocol is applied to all types of cameras, such as DSCs, DSLRs and smartphones.
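Estimating a homography from four marker correspondences can be sketched with the direct linear transform. This is an illustrative pure-Python implementation, not the authors' code; `solve` is a small Gaussian-elimination helper, and the fourth homography entry h33 is fixed to 1.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Direct linear transform: 3x3 homography H (h33 fixed to 1)
    mapping four src points onto four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp(H, p):
    """Apply homography H to point p = (x, y)."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# four chart markers and their detected positions in the current frame
# (here a pure translation of (+2, +3), as from a hand-shake displacement)
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(2.0, 3.0), (3.0, 3.0), (3.0, 4.0), (2.0, 4.0)]
H = homography(src, dst)
```

Four correspondences give exactly the eight equations needed for the eight unknowns, which is why four markers on the chart suffice to recover the frame-to-reference deformation.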

  13. A wireless sensor network-based portable vehicle detector evaluation system.

    Science.gov (United States)

    Yoo, Seong-eun

    2013-01-17

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintain their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages. It is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of a Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. The extensive evaluations of our Vehicle Detector Evaluation System show that it can measure the traffic information such as volume counts and speed with over 98% accuracy.
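The reported figure of over 98% can be read as a relative-error measure of the evaluated detector against the wired reference. A plausible sketch follows (the paper's exact formula is not given above, so the accuracy definition and the sample numbers here are assumptions):

```python
def count_accuracy(measured, reference):
    """Accuracy (in percent) of a measured traffic quantity relative to
    the ground-truth reference, as 100 * (1 - relative error)."""
    return 100.0 * (1.0 - abs(measured - reference) / reference)

# hypothetical run: the evaluated detector counts 981 vehicles
# while the wired reference system counts 1000
volume_acc = count_accuracy(981, 1000)    # above the 98% mark
speed_acc = count_accuracy(61.2, 60.0)    # a speed estimate in km/h
```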

  14. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    Directory of Open Access Journals (Sweden)

    Ming Xue

    2014-02-01

    Full Text Available To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variations in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks.

  15. BABY MONITORING SYSTEM USING WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    G. Rajesh

    2014-09-01

    Full Text Available Sudden Infant Death Syndrome (SIDS) is marked by the sudden death of an infant during sleep that is not predicted by the medical history and remains unexplained even after a thorough forensic autopsy and detailed death investigation. In this work we developed a system that addresses this problem by making the crib smart using wireless sensor networks (WSN) and smartphones. The system provides a visual monitoring service through live video, alert services through crib fencing and wake-up alerts, monitoring services through temperature and light intensity readings, a vaccine reminder, and weight monitoring.

  16. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  17. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  18. The live service of video geo-information

    Science.gov (United States)

    Xue, Wu; Zhang, Yongsheng; Yu, Ying; Zhao, Ling

    2016-03-01

    In disaster rescue, emergency response and other time-critical situations, traditional aerial photogrammetry can hardly meet the demands of real-time monitoring and dynamic tracking. To achieve a live service of video geo-information, a system has been designed and realized: an unmanned helicopter equipped with a video sensor, POS, and a high-band radio. This paper briefly introduces the concept and design of the system, lists the workflow of the video geo-information live service, and shows related experiments and some products. In the end, conclusions and an outlook are given.

  19. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  20. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Convenciones Ecológicas y Medioambientales (CIECEM, University of Huelva), in the environment of Doñana Natural Park (Huelva province). In this way, both stations, which are separated by a distance of 75 km, will work as a double video station system in order to provide trajectory and orbit information of major bolides and, thus, increase the chance of meteorite recovery in the Iberian Peninsula. The new diurnal SPMN video stations are endowed with different models of Mintron cameras (Mintron Enterprise Co., LTD). These are high-sensitivity devices that employ a colour 1/2" Sony interline-transfer CCD image sensor. Aspherical lenses are attached to the video cameras in order to maximize image quality. However, the use of fast lenses is not a priority here: while most of our nocturnal cameras use f0.8 or f1.0 lenses in order to detect meteors as faint as magnitude +3, diurnal systems employ in most cases f1.4 to f2.0 lenses. Their focal length ranges from 3.8 to 12 mm to cover different atmospheric volumes. The cameras are arranged in such a way that the whole sky is monitored from every observing station.
    Figure 1. A daylight event recorded from Sevilla on May 26, 2008 at 4h30m05.4 ±0.1s UT.
    The way our diurnal video cameras work is similar to the operation of our nocturnal systems [1]. Thus, diurnal stations are automatically switched on and off at sunrise and sunset, respectively. The images, taken at 25 fps with a resolution of 720x576 pixels, are continuously sent to PC computers through a video capture device. The computers run software (UFOCapture, by SonotaCo, Japan) that automatically registers meteor trails and stores the corresponding video frames on hard disk. Besides, before the signal from the cameras reaches the computers, a video time inserter that employs a GPS device (KIWI-OSD, by PFD Systems) inserts time information on every video frame. This allows us to measure time precisely (to about 0.01 s) along the whole fireball path.

  1. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors

    Directory of Open Access Journals (Sweden)

    Abdelkader Nasreddine Belkacem

    2015-01-01

    Full Text Available EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, as an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with opened and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.

  2. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in a real-time environment. It protects against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all of the security threats that exist for IP-based applications may also threaten video surveillance applications. As a result, cybercrime, illegal video access, mishandling of videos and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  3. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility in nighttime video imagery is of great significance for military and medical applications, but nighttime video images are of such poor quality that the target cannot be distinguished from the background. We therefore enhance nighttime video by fusing infrared and visible video images. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ weighted algorithm to fuse heterologous nighttime images. A transfer matrix is deduced from the improved SIFT algorithm and used to rapidly register the heterologous nighttime images, while the αβ weighted algorithm can be applied to any scene. In the video image fusion system, the transfer matrix registers every frame and the αβ weighted method then fuses every frame, which meets the real-time requirement of video. The fused video not only retains the clear target information of the infrared video, but also retains the detail and color information of the visible video, and the fused video plays back fluently.
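The αβ weighted fusion of two registered frames can be sketched as a per-pixel blend. This is an illustration under the assumption that αβ weighting reduces to fixed per-source weights; the paper's actual weight selection may be adaptive and differ per scene.

```python
def weighted_fuse(ir, vis, alpha, beta):
    """Per-pixel weighted fusion of two registered grayscale frames:
    fused = alpha * infrared + beta * visible (alpha + beta = 1 here)."""
    return [[alpha * a + beta * b for a, b in zip(row_ir, row_vis)]
            for row_ir, row_vis in zip(ir, vis)]

# tiny registered frames: the infrared image carries a bright target,
# the visible image carries background detail
ir = [[200.0, 200.0],
      [10.0, 10.0]]
vis = [[40.0, 60.0],
       [80.0, 100.0]]
fused = weighted_fuse(ir, vis, alpha=0.6, beta=0.4)
```

With α weighting the infrared source, the hot target stays prominent in the fused frame while the β-weighted visible channel contributes background detail.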

  4. Internetting tactical security sensor systems

    Science.gov (United States)

    Gage, Douglas W.; Bryan, W. D.; Nguyen, Hoa G.

    1998-08-01

    The Multipurpose Surveillance and Security Mission Platform (MSSMP) is a distributed network of remote sensing packages and control stations, designed to provide a rapidly deployable, extended-range surveillance capability for a wide variety of military security operations and other tactical missions. The baseline MSSMP sensor suite consists of a pan/tilt unit with video and FLIR cameras and laser rangefinder. With an additional radio transceiver, MSSMP can also function as a gateway between existing security/surveillance sensor systems such as TASS, TRSS, and IREMBASS, and IP-based networks, to support the timely distribution of both threat detection and threat assessment information. The MSSMP system makes maximum use of Commercial Off The Shelf (COTS) components for sensing, processing, and communications, and of both established and emerging standard communications networking protocols and system integration techniques. Its use of IP-based protocols allows it to freely interoperate with the Internet -- providing geographic transparency, facilitating development, and allowing fully distributed demonstration capability -- and prepares it for integration with the IP-based tactical radio networks that will evolve in the next decade. Unfortunately, the Internet's standard Transport layer protocol, TCP, is poorly matched to the requirements of security sensors and other quasi-autonomous systems in being oriented to conveying a continuous data stream, rather than discrete messages. Also, its canonical 'socket' interface both conceals short losses of communications connectivity and simply gives up and forces the Application layer software to deal with longer losses. For MSSMP, a software applique is being developed that will run on top of User Datagram Protocol (UDP) to provide a reliable message-based Transport service. In addition, a Session layer protocol is being developed to support the effective transfer of control of multiple platforms among multiple control
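A reliable message-based transport over UDP of the kind described can be sketched with sequence numbers and acknowledgements. This is a minimal loopback illustration, not the MSSMP applique itself; the message format (a 4-byte sequence prefix) and the retry policy are assumptions.

```python
import socket
import struct
import threading

def send_reliable(sock, addr, seq, payload, timeout=0.5, retries=5):
    """Send one sequence-numbered datagram and wait for a matching ACK,
    retransmitting on timeout."""
    pkt = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(pkt, addr)
        try:
            ack, _ = sock.recvfrom(16)
            if struct.unpack("!I", ack[:4])[0] == seq:
                return True
        except socket.timeout:
            continue
    return False

def recv_message(sock):
    """Receive one datagram and acknowledge its sequence number."""
    data, peer = sock.recvfrom(2048)
    seq = struct.unpack("!I", data[:4])[0]
    sock.sendto(struct.pack("!I", seq), peer)  # ACK back to the sender
    return seq, data[4:]

# loopback demonstration: a receiver thread plus one reliable send
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

result = {}
t = threading.Thread(target=lambda: result.update(msg=recv_message(rx)))
t.start()
delivered = send_reliable(tx, rx.getsockname(), 7, b"alarm: zone 3")
t.join()
rx.close(); tx.close()
```

Unlike TCP's byte stream, each datagram here is a discrete message, and the sender (not the socket layer) decides how long to keep retrying after a connectivity loss.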

  5. Cobra: A content-based video retrieval system

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, W.; Jensen, C.S.; Jeffery, K.G.; Pokorny, J.; Saltenis, S.; Bertino, E.; Böhm, K.; Jarke, M.

    2002-01-01

    An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level

  6. Video performance for high security applications

    International Nuclear Information System (INIS)

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as the binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PA) can be determined as a function of a variety of conditions or assumptions. PA used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.
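The binomial-distribution modeling mentioned above can be sketched as follows. This is an illustrative form only (the probability that at least k of n independent assessment opportunities succeed); the actual Sandia models and their parameters are not reproduced here.

```python
from math import comb

def prob_assessment(p_single, n, k):
    """Probability that at least k of n independent assessment
    opportunities succeed, under a binomial model."""
    return sum(comb(n, i) * p_single ** i * (1.0 - p_single) ** (n - i)
               for i in range(k, n + 1))

# hypothetical numbers: 0.7 per-opportunity probability, five
# opportunities, at least one successful assessment required
pa = prob_assessment(0.7, 5, 1)  # equals 1 - 0.3**5
```

A model of this shape lets a systems engineer trade off per-frame quality against the number of frames (or observers) available before the assessment window closes.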

  7. Optimal resource allocation for distributed video communication

    CERN Document Server

    He, Yifeng

    2013-01-01

    While most books on the subject focus on resource allocation in just one type of network, this book is the first to examine the common characteristics of multiple distributed video communication systems. Comprehensive and systematic, Optimal Resource Allocation for Distributed Video Communication presents a unified optimization framework for resource allocation across these systems. The book examines the techniques required for optimal resource allocation over Internet, wireless cellular networks, wireless ad hoc networks, and wireless sensor networks. It provides you with the required foundat

  8. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  9. 78 FR 11988 - Open Video Systems

    Science.gov (United States)

    2013-02-21

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 [CS Docket No. 96-46, FCC 96-334] Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule; announcement of effective date... 43160, August 21, 1996. The final rules modified rules and policies concerning Open Video Systems. DATES...

  10. Sensors Applications, Volume 4, Sensors for Automotive Applications

    Science.gov (United States)

    Marek, Jiri; Trah, Hans-Peter; Suzuki, Yasutoshi; Yokomori, Iwao

    2003-07-01

    An international team of experts from the leading companies in this field gives a detailed picture of existing as well as future applications. They discuss in detail current technologies, design and construction concepts, market considerations and commercial developments. Topics covered include vehicle safety, fuel consumption, air conditioning, emergency control, traffic control systems, and electronic guidance using radar and video. Meeting the growing need for comprehensive information on the capabilities, potentials and limitations of modern sensor systems, Sensors Applications is a book series covering the use of sophisticated technologies and materials for the creation of advanced sensors and their implementation in the key areas process monitoring, building control, health care, automobiles, aerospace, environmental technology and household appliances.

  11. Application of Video Recognition Technology in Landslide Monitoring System

    Directory of Open Access Journals (Sweden)

    Qingjia Meng

    2018-01-01

    Full Text Available Video recognition technology is applied to a landslide emergency remote monitoring system, which identifies the trajectories of the landslide. The geological disaster monitoring system combines the analysis of landslide monitoring data with video recognition technology. The landslide video monitoring system transmits video image information, time points, network signal strength and power supply status to the server over a 4G network. The data are comprehensively analysed through the remote man-machine interface, and the front-end video surveillance system is controlled either automatically, when a threshold is reached, or manually. The system performs intelligent identification of the target landslide video: an algorithm embedded in the intelligent analysis module identifies, detects, analyses and filters the video frames and applies morphological processing. An algorithm based on artificial intelligence and pattern recognition marks the target landslide in the video frame and confirms whether the landslide is normal. The landslide video monitoring system realizes remote monitoring and control from the mobile side, and provides a quick and easy monitoring technology.

  12. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focuses on emotion recognition from the face and on hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device...

  13. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  14. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm-1. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  15. Use of a Proximity Sensor Switch for "Hands Free" Operation of Computer-Based Video Prompting by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Ivey, Alexandria N.; Mechling, Linda C.; Spencer, Galen P.

    2015-01-01

    In this study, the effectiveness of a "hands free" approach for operating video prompts to complete multi-step tasks was measured. Students advanced the video prompts by using a motion (hand wave) over a proximity sensor switch. Three young adult females with a diagnosis of moderate intellectual disability participated in the study.…

  16. Dedicated data recording video system for Spacelab experiments

    Science.gov (United States)

    Fukuda, Toshiyuki; Tanaka, Shoji; Fujiwara, Shinji; Onozuka, Kuniharu

    1984-04-01

    A feasibility study of modifying a video tape recorder (VTR) to add data recording capability was conducted. This on-board system supports Spacelab experiments as a dedicated video system and a dedicated data recording system, operating independently of the normal operation of the Orbiter, Spacelab and the other experiments. It continuously records the video image signals together with the acquired data, status and operator's voice, at the same time, on one cassette video tape. Recorded items include the crews' actions, animals' behavior, microscopic views and materials melting in a furnace. It is thus expected that experimenters can perform a very easy and convenient analysis of the synchronized video, voice and data signals in their post-flight analysis.

  17. Patterned Video Sensors For Low Vision

    Science.gov (United States)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns, to compensate partly for some visual defects, are proposed. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  18. An integrated circuit/packet switched video conferencing system

    Energy Technology Data Exchange (ETDEWEB)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A. [Fermi National Accelerator Lab., Batavia, IL (United States). HEP Network Resource Center; Waits, T.A. [Rutgers Univ., Piscataway, NJ (United States). Dept. of Physics and Astronomy

    1996-07-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology, circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC, with the help of members of the CDF collaboration, set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  19. An integrated circuit/packet switched video conferencing system

    International Nuclear Information System (INIS)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A.; Waits, T.A.

    1996-01-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology, circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC, with the help of members of the CDF collaboration, set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  20. Implementation of nuclear material surveillance system based on the digital video capture card and counter

    International Nuclear Information System (INIS)

    Lee, Sang Yoon; Song, Dae Yong; Ko, Won Il; Ha, Jang Ho; Kim, Ho Dong

    2003-07-01

    In this paper, the implementation techniques of a nuclear material surveillance system based on a digital video capture board and a digital counter are described. The surveillance system to be developed consists of CCD cameras, neutron monitors, and a PC for data acquisition. To develop the system, the properties of the PCI-based capture board and counter were investigated, and the characteristics of the related SDK library were summarized. This report could be used by developers who want to build surveillance systems for various experimental environments based on DVRs and sensors using Borland C++ Builder.

  1. A Sensor Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    Science.gov (United States)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still-image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlapping between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimation produced by the navigation system and thereby maximize the encoder performance. Experiments are performed on both simulated and real-world video sequences.
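
The metadata-then-refinement idea above can be sketched in two steps: predict a global motion vector from the INS data, then refine it with a small local search over pixel differences. This is a generic illustration, not the paper's encoder; the nadir-camera/flat-terrain geometry and all parameter names are simplifying assumptions.

```python
import numpy as np

def predict_global_motion(ins_shift_m, focal_px, altitude_m):
    """Predict the image-space motion vector (vx, vy) in pixels from INS
    metadata, assuming a nadir-looking camera over flat terrain."""
    dx, dy = ins_shift_m
    return np.array([dx * focal_px / altitude_m, dy * focal_px / altitude_m])

def refine_motion(prev, curr, predicted, radius=2):
    """Refine the predicted motion by exhaustive search in a small window
    around the prediction, minimizing the sum of absolute differences."""
    best, best_sad = predicted.astype(int), np.inf
    h, w = prev.shape
    # margin keeps every candidate shift inside the image bounds
    m = radius + int(np.abs(predicted).max()) + 1
    core = curr[m:h - m, m:w - m]
    for oy in range(-radius, radius + 1):
        for ox in range(-radius, radius + 1):
            cand = predicted.astype(int) + np.array([ox, oy])
            shifted = prev[m + cand[1]:h - m + cand[1],
                           m + cand[0]:w - m + cand[0]]
            sad = np.abs(core - shifted).sum()
            if sad < best_sad:
                best, best_sad = cand, sad
    return best
```

Because the INS prediction is already close, the search radius stays tiny, which is exactly why the refinement is low-complexity.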

  2. Affordable multisensor digital video architecture for 360° situational awareness displays

    Science.gov (United States)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

    One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). The ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting therefore requires that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., with low latency). Advances in display and sensor technologies are providing unprecedented opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address the development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  3. Modular integrated video system (MIVS) review station

    International Nuclear Information System (INIS)

    Garcia, M.L.

    1988-01-01

    An unattended video surveillance unit, the Modular Integrated Video System (MIVS), has been developed by Sandia National Laboratories for International Safeguards use. An important support element of this system is a semi-automatic Review Station. Four component modules, including an 8 mm video tape recorder, a 4-inch video monitor, a power supply and control electronics utilizing a liquid crystal display (LCD), are mounted in a suitcase for portability. The unit communicates through the interactive, menu-driven LCD and may be operated on facility power throughout the world. During surveillance, the MIVS records video information at specified time intervals, while also inserting consecutive scene numbers and tamper event information. Using either of two available modes of operation, the Review Station reads the inserted information, counts the number of missed scenes and/or tamper events encountered on the tapes, and reports this to the user on the LCD. At the end of a review session, the system will summarize the results of the review, stop the recorder, and advise the user of the completion of the review. In addition, the Review Station will check for any video loss on the tape.
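
The missed-scene count described above follows directly from the consecutive scene numbers inserted during recording: any gap in the recovered sequence is a run of missed scenes. A minimal sketch of that check (the function name and interface are illustrative, not from the MIVS documentation):

```python
def count_missed_scenes(scene_numbers):
    """Scene numbers are inserted consecutively during recording, so each
    gap between successive recovered numbers counts as missed scenes."""
    missed = 0
    for prev, curr in zip(scene_numbers, scene_numbers[1:]):
        missed += max(curr - prev - 1, 0)
    return missed
```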

  4. Beat-to-beat heart rate estimation fusing multimodal video and sensor data.

    Science.gov (United States)

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference.
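
The core of such multimodal fusion can be illustrated with inverse-variance weighting, the maximum-likelihood way to combine independent Gaussian estimates of the same beat-to-beat interval. This is a generic sketch of the principle, not the paper's exact Bayesian estimator:

```python
import numpy as np

def fuse_intervals(estimates, variances):
    """Fuse per-channel beat-to-beat interval estimates (seconds) by
    inverse-variance weighting; channels with lower noise dominate.
    Returns the fused estimate and its (reduced) variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var
```

The fused variance is always below the smallest input variance, which is why redundant channels improve both coverage and accuracy.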

  5. Adaptive intrusion data system

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1976-01-01

    An Adaptive Intrusion Data System (AIDS) was developed to collect data from intrusion alarm sensors as part of an evaluation system to improve sensor performance. AIDS is a unique digital data compression, storage, and formatting system. It also incorporates a capability for video selection and recording for assessment of the sensors monitored by the system. The system is software-reprogrammable to numerous configurations that may be utilized for the collection of environmental, bi-level, analog and video data. The output of the system is digital tapes formatted for direct data reduction on a CDC 6400 computer, and video tapes containing time-tagged information that can be correlated with the digital data.

  6. Encrypted IP video communication system

    Science.gov (United States)

    Bogdan, Apetrechioaie; Luminiţa, Mateescu

    2010-11-01

    Digital video transmission is a permanent subject of development, research and improvement. This field of research has an exponentially growing market in civil, surveillance, security and military applications. Many solutions (FPGA, ASIC, DSP) have been used for this purpose. The paper presents the implementation of an encrypted, IP based video communication system having a competitive performance/cost ratio.

  7. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For most elderly, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a fallen elder, who may have fainted, is delayed, more serious injury may occur. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or need an elder to wear sensors to detect falling incidents, which wastes considerable human effort or causes inconvenience for the elders. In this paper, we propose an automatic falling-detection algorithm and implement this algorithm in a multi-camera video surveillance system. The algorithm uses each camera to fetch the images from the regions required to be monitored. It then uses a falling-pattern recognition algorithm to determine if a falling incident has occurred. If so, the system sends short messages to those who need to be notified. The algorithm has been implemented in a DSP-based hardware acceleration board for functionality proof. Simulation results show that the accuracy of falling detection can achieve at least 90% and the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
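
The paper does not spell out its falling pattern, but a common heuristic in camera-based fall detection is that a person's bounding box becomes wider than tall and stays that way for several frames. A very simplified sketch of such a test (the threshold and frame count are illustrative assumptions, not the paper's values):

```python
def looks_like_fall(bbox_history, ratio_thresh=1.2, frames_required=5):
    """Flag a fall when width/height exceeds ratio_thresh for
    frames_required consecutive frames.
    bbox_history: list of (width, height) tuples, one per frame."""
    run = 0
    for w, h in bbox_history:
        run = run + 1 if w > ratio_thresh * h else 0
        if run >= frames_required:
            return True
    return False
```

Requiring a sustained run of frames filters out momentary detections such as bending or sitting down quickly.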

  8. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its indexing feature. As the use of video cameras has greatly increased in recent years, face recognition makes a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record videos without fixed postures of the subject, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system has very high efficiency in processing real-life videos, and it is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from continuously watching the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solve real cases.
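
The reconstruction idea can be sketched with plain (non-fuzzy) PCA as a simplified stand-in for the paper's FPCA: learn a subspace from unoccluded training faces, estimate the subspace coefficients from the visible pixels only, and fill in the occluded part from the reconstruction. All names below are illustrative.

```python
import numpy as np

def pca_basis(X, k):
    """Mean and top-k principal components of training samples (rows of X)."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruct_occluded(x, mask, mu, comps):
    """Fit PCA coefficients using only the visible entries (mask==True),
    then reconstruct the full vector, filling in the occluded part."""
    A = comps[:, mask].T                      # visible rows of the basis
    b = x[mask] - mu[mask]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu + coef @ comps
```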

  9. Video game training and the reward system

    OpenAIRE

    Lorenz, R.; Gleich, T.; Gallinat, J.; Kühn, S.

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors ...

  10. Smart Optoelectronic Sensors and Intelligent Sensor Systems

    Directory of Open Access Journals (Sweden)

    Sergey Y. YURISH

    2012-03-01

    Full Text Available Light-to-frequency converters are widely used in various optoelectronic sensor systems. However, the subsequent frequency-to-digital conversion is a bottleneck in such systems due to the broad frequency range of light-to-frequency converters' outputs. This paper describes an effective OEM design approach, which can be used for the design of smart and intelligent sensor systems. The design is based on a novel, multifunctional integrated circuit, the Universal Sensors & Transducers Interface, especially designed for such sensor applications. Experimental results have confirmed the efficiency of this approach and its high metrological performance.
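
The broad-range bottleneck can be illustrated with the classic gated-counting method of frequency-to-digital conversion: pulses are counted during a fixed gate time, so the ±1-count quantization makes the relative error grow as the input frequency falls. This is a generic sketch of the problem the paper addresses, not the Universal Sensors & Transducers Interface's actual method:

```python
def gated_count_frequency(f_in_hz, gate_s):
    """Count input pulses during a fixed gate time and convert back to
    frequency. Returns the estimate and the +/-1-count relative error
    bound, which is large at low input frequencies."""
    counts = int(f_in_hz * gate_s)          # ideal counter, truncated
    f_est = counts / gate_s
    rel_err_bound = 1.0 / max(counts, 1)
    return f_est, rel_err_bound
```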

  11. Web-based remote video monitoring system implemented using Java technology

    Science.gov (United States)

    Li, Xiaoming

    2012-04-01

    An HTTP based video transmission system has been built upon a p2p (peer to peer) network structure utilizing Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated sub-networks. In order to achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer can respond to video stream requests in the HTTP protocol. An HTTP based pipe communication model is developed to speed up the transmission of video stream data, which has been encoded into fragments using the JPEG codec. To make the system feasible for conveying video streams between arbitrary peers on the web, an HTTP protocol based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
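
The fragment-based transfer of JPEG-encoded frames can be sketched as numbered chunks that the receiving peer reassembles in order, even if relaying delivers them out of order. This is an illustrative sketch of the idea, not the paper's Java implementation:

```python
def fragment(frame_bytes, size):
    """Split one JPEG-encoded frame into numbered fixed-size fragments for
    pipelined transfer; the last fragment may be shorter."""
    return [(seq, frame_bytes[i:i + size])
            for seq, i in enumerate(range(0, len(frame_bytes), size))]

def reassemble(fragments):
    """Rebuild the frame from fragments that may arrive out of order."""
    return b"".join(chunk for _, chunk in sorted(fragments))
```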

  12. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.
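
As a baseline for the matching step the paper improves upon, classical nearest-neighbour descriptor matching with Lowe's ratio test can be sketched as follows; this is a conventional stand-in for comparison, not the paper's neural-network matcher:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping the match only when the best distance is clearly smaller than
    the second best (Lowe's ratio test). Returns (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

The per-query exhaustive distance computation is what makes this baseline slow for long videos, which motivates learned or approximate matchers.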

  13. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    International Nuclear Information System (INIS)

    Lee, Inho; Oh, Jaesung; Oh, Jun-Ho; Kim, Inhyeok

    2017-01-01

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify objects in the environment such as those posed in the challenge by the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrains, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot, DRC-HUBO, and the results are demonstrated in the accompanying video.
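
The random-sampling, co-planarity idea can be sketched as a RANSAC-style dominant-plane segmentation of an unorganized cloud. This illustrates only the co-planarity test; the paper additionally uses proximity and super-pixel cues, and all parameters below are illustrative:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Repeatedly fit a plane to 3 randomly sampled points and keep the
    candidate with the most inliers (points within tol of the plane).
    Returns a boolean inlier mask for the dominant plane."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((points - p0) @ n)     # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```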

  14. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Inho [Institute for Human and Machine Cognition (IHMC), Florida (United States); Oh, Jaesung; Oh, Jun-Ho [Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of); Kim, Inhyeok [NAVER Green Factory, Seongnam (Korea, Republic of)

    2017-06-15

    This research aims to develop a vision sensor system and a recognition algorithm to enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots that perform manipulation and locomotion tasks must identify objects in the environment such as those posed in the challenge by the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrains, and stairs, among others. In order for a humanoid to undertake a number of tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor and a motor are used to obtain 3D cloud data. We project the 3D cloud data onto a 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs functions such as those performed by the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized cloud data according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot, DRC-HUBO, and the results are demonstrated in the accompanying video.

  15. Video change detection for fixed wing UAVs

    Science.gov (United States)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al. [1]. We present the draft of a process chain for an image based change detection which is designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be handled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change between "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system which comprises a differential GPS and an autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a front-end database to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to the real video data acquired by the advanced COTS fixed wing UAV and to synthetic data. For the…
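
Once the "before" and "after" frames are registered (the small-perspective-change condition the paper stresses), the core detection step reduces to pixelwise differencing and thresholding. A minimal sketch, with an illustrative threshold:

```python
import numpy as np

def change_mask(before, after, thresh=0.2):
    """Pixelwise change detection between registered frames: absolute
    difference followed by a threshold. Good registration is assumed;
    in practice morphological cleanup would follow."""
    return np.abs(after.astype(float) - before.astype(float)) > thresh
```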

  16. A data-management system using sensor technology and wireless devices for port security

    Science.gov (United States)

    Saldaña, Manuel; Rivera, Javier; Oyola, Jose; Manian, Vidya

    2014-05-01

    Sensor technologies such as infrared sensors, hyperspectral imaging, and video camera surveillance are proven to be viable in port security. Drawing from sources such as infrared sensor data, digital camera images and processed hyperspectral images, this article explores the implementation of a real-time data delivery system. In an effort to improve the manner in which anomaly detection data is delivered to interested parties in port security, this system explores how a client-server architecture can provide protected access to data, reports, and device status. Sensor data and hyperspectral image data are kept in a monitored directory, where the system links them to existing users in the database. Since this system renders processed hyperspectral images that are dynamically added to the server - which often occupy a large amount of space - the resolution of these images is trimmed down to around 1024×768 pixels. Any change to an image, or any data modification originating from a sensor, triggers a message to all users associated with it. These messages are sent to the corresponding users through automatic email generation and through a push notification using Google Cloud Messaging for Android. Moreover, this paper presents the complete architecture for data reception from the sensors, processing, and storage, and discusses how users of this system, such as port security personnel, can benefit from this service to receive secure real-time notifications when their designated sensors have detected anomalies and/or to access results from processed hyperspectral imagery relevant to their assigned posts.
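
The monitored-directory trigger can be sketched as two small functions: snapshot the directory's modification times, then diff snapshots to find the files whose arrival or change should fire the e-mail/push notifications. This is an illustrative polling sketch, not the article's implementation:

```python
import os

def scan(directory):
    """Snapshot of file name -> modification time for a monitored directory."""
    return {name: os.path.getmtime(os.path.join(directory, name))
            for name in os.listdir(directory)}

def detect_changes(old, new):
    """Names added or modified since the previous snapshot; in the described
    system each such event would trigger notifications to the linked users."""
    return sorted(n for n, t in new.items() if old.get(n) != t)
```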

  17. Video semaphore decoding for free-space optical communication

    Science.gov (United States)

    Last, Matthew; Fisher, Brian; Ezekwe, Chinwuba; Hubert, Sean M.; Patel, Sheetal; Hollar, Seth; Leibowitz, Brian S.; Pister, Kristofer S. J.

    2001-04-01

    Using real-time image processing we have demonstrated a low bit-rate free-space optical communication system at a range of more than 20 km with an average optical transmission power of less than 2 mW. The transmitter is an autonomous one-cubic-inch microprocessor-controlled sensor node with a laser diode output. The receiver is a standard CCD camera with a 1-inch aperture lens, and both hardware and software implementations of the video semaphore decoding algorithm. With this system sensor data can be reliably transmitted 21 km from San Francisco to Berkeley.
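
The decoding side of such a link can be sketched as on-off keying: threshold the transmitter's intensity in successive video frames into bits, then pack the bits into bytes. This is a simplified illustration of the principle (one frame per bit, no framing or error correction), not the paper's semaphore scheme:

```python
def decode_ook(intensities, threshold):
    """Threshold the per-frame mean intensity of the transmitter's pixel
    region: bright frame = 1, dark frame = 0."""
    return [1 if v > threshold else 0 for v in intensities]

def bits_to_bytes(bits):
    """Pack decoded bits (MSB first) into bytes."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```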

  18. Towards Sensor Database Systems

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Gehrke, Johannes; Seshadri, Praveen

    2001-01-01

    … These systems lack flexibility because data is extracted in a predefined way; also, they do not scale to a large number of devices because large volumes of raw data are transferred regardless of the queries that are submitted. In our new concept of sensor database system, queries dictate which data is extracted from the sensors. In this paper, we define the concept of sensor databases mixing stored data represented as relations and sensor data represented as time series. Each long-running query formulated over a sensor database defines a persistent view, which is maintained during a given time interval. We also describe the design and implementation of the COUGAR sensor database system.

  19. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats.
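
The folding mechanism behind this aliasing can be demonstrated numerically: a vertical component above the field Nyquist frequency (half the frame Nyquist) reappears at a low frequency when only alternate lines are sampled, which is how interlacing shifts noise power across the spectrum. An illustrative sketch:

```python
import numpy as np

# A vertical component at 0.375 cycles/line: representable at the full
# frame sampling (Nyquist 0.5) but above the field Nyquist (0.25) of an
# interlaced readout that delivers only alternate lines per field.
lines = np.arange(64)
signal = np.cos(2 * np.pi * 0.375 * lines)

field = signal[::2]                   # one interlaced field: alternate lines
spectrum = np.abs(np.fft.rfft(field))
alias_bin = int(np.argmax(spectrum))  # 32 field lines -> bin k = freq * 32
```

At the field rate, 0.375 cycles/line becomes 0.75 cycles per field line, which folds to 0.25 cycles per field line: bin 8 of the 32-sample field. White noise undergoes the same folding, raising the aliased noise power.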

  20. Interactive Videos Enhance Learning about Socio-Ecological Systems

    Science.gov (United States)

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…

  1. Video game training and the reward system.

    Science.gov (United States)

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  2. Flight route designing and mission planning of power line inspecting system based on multi-sensor UAV

    International Nuclear Information System (INIS)

    Xiaowei, Xie; Zhengjun, Liu; Zhiquan, Zuo

    2014-01-01

    In order to obtain various information about power facilities, such as spatial location, geometry, image data and video in the infrared and ultraviolet bands, an Unmanned Aerial Vehicle (UAV) power line inspecting system needs to integrate a variety of sensors for data collection. Low-altitude, side-looking imaging is required for the UAV flight to ensure that the sensors acquire high-quality data and that the devices remain safe. The UAV power line inspecting system presented here differs from existing ones used in surveying and mapping. According to the characteristics of the UAV, for example carrying multiple sensors, side-looking imaging, working at low altitude, complex terrain conditions and corridor-type flight, this paper puts forward a UAV power line inspecting scheme which comprehensively considers the UAV performance, sensor parameters and task requirements. The scheme is finally tested in a region of Guangdong province, and the preliminary results show that it is feasible.
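
The corridor-type, side-looking flight constraint can be sketched as waypoint generation: shift each power-line vertex perpendicular to the local line direction so the camera images the line obliquely instead of overflying it. This is an illustrative 2D sketch, not the paper's planning algorithm:

```python
import math

def offset_waypoints(line_pts, offset_m):
    """Waypoints for a side-looking corridor flight: offset each power-line
    vertex by offset_m perpendicular to the local line direction.
    line_pts: list of (x, y) vertices in metres."""
    wps = []
    for i, (x, y) in enumerate(line_pts):
        # local direction from the neighbouring vertices (clamped at ends)
        x0, y0 = line_pts[max(i - 1, 0)]
        x1, y1 = line_pts[min(i + 1, len(line_pts) - 1)]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        wps.append((x - dy / norm * offset_m, y + dx / norm * offset_m))
    return wps
```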

  3. Feasibility of an ingestible sensor-based system for monitoring adherence to tuberculosis therapy.

    Directory of Open Access Journals (Sweden)

    Robert Belknap

    Full Text Available Poor adherence to tuberculosis (TB) treatment hinders the individual's recovery and threatens public health. Currently, directly observed therapy (DOT) is the standard of care; however, high sustaining costs limit its availability, creating a need for more practical adherence confirmation methods. Techniques such as video monitoring and devices to time-register the opening of pill bottles are unable to confirm actual medication ingestions. A novel approach developed by Proteus Digital Health, Inc. consists of an ingestible sensor and an on-body wearable sensor; together, they electronically confirm unique ingestions and record the date/time of the ingestion. A feasibility study using an early prototype was conducted in active TB patients to determine the system's accuracy and safety in confirming co-ingestion of TB medications with sensors. Thirty patients completed 10 DOT visits and 1,080 co-ingestion events; the system showed 95.0% (95% CI 93.5-96.2%) positive detection accuracy, defined as the number of detected sensors divided by the number of transmission capable sensors administered. The specificity was 99.7% (95% CI 99.2-99.9%) based on three false signals recorded by receivers. The system's identification accuracy, defined as the number of correctly identified ingestible sensors divided by the number of sensors detected, was 100%. Of 11 adverse events, four were deemed related or possibly related to the device: three mild skin rashes and one complaint of nausea. The system's positive detection accuracy was not affected by the subjects' Body Mass Index (p = 0.7309). Study results suggest the system is capable of correctly identifying ingestible sensors with high accuracy, poses a low risk to users, and may have high patient acceptance. The system has the potential to confirm medication specific treatment compliance on a dose-by-dose basis. When coupled with mobile technology, the system could allow wirelessly observed therapy (WOT) for
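
    The three accuracy measures defined in this abstract are simple proportions; a minimal sketch follows, using round illustrative counts (not the study's raw data) that are consistent with the reported 95.0% detection accuracy over 1,080 events.

```python
# Accuracy measures as defined in the abstract. All counts below are
# hypothetical round numbers chosen for illustration only.

def positive_detection_accuracy(detected, administered):
    """Detected ingestible sensors / transmission-capable sensors administered."""
    return detected / administered

def identification_accuracy(correctly_identified, detected):
    """Correctly identified ingestible sensors / sensors detected."""
    return correctly_identified / detected

def specificity(false_signals, non_ingestion_intervals):
    """1 - rate of false signals over intervals with no true ingestion.
    (The denominator here is an assumed quantity, not stated in the abstract.)"""
    return 1 - false_signals / non_ingestion_intervals

pda = positive_detection_accuracy(detected=1026, administered=1080)  # 0.95
ida = identification_accuracy(correctly_identified=1026, detected=1026)  # 1.0
spec = specificity(false_signals=3, non_ingestion_intervals=1000)  # 0.997
```

    With 1,026 of 1,080 sensors detected the detection accuracy is exactly 95.0%, matching the headline figure; the confidence intervals quoted in the abstract would require the study's exact interval method, which is not reproduced here.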

  4. Real-time high-level video understanding using data warehouse

    Science.gov (United States)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

    High-level video content analysis such as video surveillance is often limited by the computational aspects of automatic image understanding, i.e. it requires huge computing resources for reasoning processes like categorization and huge amounts of data to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used as a decision-support system for vision algorithms and as a mass-storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimise the data warehouse concepts for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, it is processed and the in-memory data model is updated. After some processing, possible interpretations of this data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally we show how this system becomes a high-semantic data container for external data mining.
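
    The core idea of storing vision-algorithm metadata as RDF-style triples can be sketched with a minimal in-memory subject-predicate-object store. The store, vocabulary and track identifiers below are invented for illustration; they are not the authors' schema or RDF engine.

```python
from collections import defaultdict

class TripleStore:
    """Minimal in-memory subject-predicate-object store (illustrative only,
    not the RDF data warehouse described in the article)."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return sorted((ts, tp, to) for (ts, tp, to) in self.triples
                      if (s is None or ts == s)
                      and (p is None or tp == p)
                      and (o is None or to == o))

store = TripleStore()
# Hypothetical vocabulary: a tracked object, its class, and its camera.
store.add("track:42", "rdf:type", "Person")
store.add("track:42", "seenBy", "camera:3")
store.add("track:7", "rdf:type", "Vehicle")

people = store.query(p="rdf:type", o="Person")  # all tracks classified Person
```

    A real deployment would use a SPARQL-capable store; the point here is only that pattern queries over triples map naturally onto questions like "which tracks were classified as people".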

  5. Quality and noise measurements in mobile phone video capture

    Science.gov (United States)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
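
    A common full-reference metric of the kind described here is PSNR between the encoder input and the reconstructed sequence. The abstract does not name the metric used, so the sketch below is a standard illustration, not the paper's measurement code.

```python
import math

def psnr(reference, reconstructed, peak=255.0):
    """Full-reference peak signal-to-noise ratio (dB) between an encoder
    input frame and its reconstruction, both given as flat pixel lists."""
    assert len(reference) == len(reconstructed)
    mse = sum((r - d) ** 2 for r, d in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak * peak / mse)

frame = [10, 20, 30, 40]
noisy = [11, 21, 31, 41]  # every pixel off by one -> MSE = 1
quality = psnr(frame, noisy)  # 20*log10(255) ~ 48.13 dB
```

    Note that, as the paper's experiments show, such encoder-side metrics can miss system-level effects: if low-light noise enters before the encoder, the encoder faithfully reproduces noise and the full-reference score stays high while visual quality is poor.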

  6. Staff acceptance of video monitoring for coordination: a video system to support perioperative situation awareness.

    Science.gov (United States)

    Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard

    2009-08-01

    To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video at an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% reporting no concern and 42% little concern. The two most frequently reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulating image quality resulted in overall acceptance of the monitoring video, with residual levels of concern. OR nurses may express staff privacy concerns in the form of concerns over patient privacy. This study provides suggestions for technological and implementation strategies for video monitoring for coordination use in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a sense of loss of autonomy.

  7. New Management Tools – From Video Management Systems to Business Decision Systems

    Directory of Open Access Journals (Sweden)

    Emilian Cristian IRIMESCU

    2015-06-01

    Full Text Available In recent decades, management has been characterized by the increased use of Business Decision Systems, also called Decision Support Systems. Moreover, systems that were until now used in a traditional way for simple activities (such as security) have migrated to the decision-making area of management. One example is the Video Management System used in physical security. This article underlines how Video Management Systems have evolved into Business Decision Systems, what the advantages of their use are, and what the trends in this industry are. The article also analyzes whether Video Management Systems are at this moment true Business Decision Systems, or whether some functions are still missing to rank them at that level.

  8. Multi-modal Video Surveillance Aided by Pyroelectric Infrared Sensors

    OpenAIRE

    Magno , Michele; Tombari , Federico; Brunelli , Davide; Di Stefano , Luigi; Benini , Luca

    2008-01-01

    The interest in low-cost and small-size video surveillance systems able to collaborate in a network has been increasing in recent years. Thanks to progress in low-power design, research has greatly reduced the size and power consumption of such distributed embedded systems, providing flexibility and quick deployment, and allowing effective vision algorithms to perform image processing directly on the embedded node. In this paper we present ...

  9. Unattended digital video surveillance: A system prototype for EURATOM safeguards

    International Nuclear Information System (INIS)

    Chare, P.; Goerten, J.; Wagner, H.; Rodriguez, C.; Brown, J.E.

    1994-01-01

    Ever-increasing capabilities in video and computer technology have changed the face of video surveillance. From yesterday's film and analog video tape-based systems, we now emerge into the digital era with surveillance systems capable of digital image processing, image analysis, decision control logic, and random data access features -- all of which provide greater versatility with the potential for increased effectiveness in video surveillance. Digital systems also offer other advantages, such as the ability to ''compress'' data, providing increased storage capacity and the potential for longer surveillance periods. Remote surveillance and system-to-system communications are further benefits that can be derived from digital surveillance systems. All of these features are extremely important in today's climate of increasing safeguards activity and decreasing budgets. Los Alamos National Laboratory's Safeguards Systems Group and the EURATOM Safeguards Directorate have teamed to design and implement a surveillance system that will take advantage of the versatility of digital video for facility surveillance and data review. In this paper we familiarize the reader with system components and features and report on progress in developmental areas such as image compression and region-of-interest processing.

  10. Video Game Training and the Reward System

    Directory of Open Access Journals (Sweden)

    Robert C. Lorenz

    2015-02-01

    Full Text Available Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  11. Video game training and the reward system

    Science.gov (United States)

    Lorenz, Robert C.; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training. PMID:25698962

  12. Energy Systems Integration Facility Videos

    Science.gov (United States)

    A collection of videos from NREL's Energy Systems Integration Facility (ESIF), including: NREL + SolarCity: Maximizing Solar Power on Electrical Grids; Redefining What's Possible for Renewable Energy: Grid Integration; and Robot-Powered Reliability Testing at NREL's ESIF Microgrid.

  13. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing
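
    The individual CCD noise sources mentioned here are commonly combined additively in variance. The sketch below illustrates that standard combination (shot, read, and quantization noise) with invented parameter values; it is not the noise model developed in the paper, which additionally covers demosaicing.

```python
import math

def noise_stddev(signal_e, read_noise_e, full_well_e=20000.0, adc_bits=10):
    """Approximate per-pixel noise (in electrons) for a CCD pixel as the
    root of summed variances: shot noise (Poisson, variance = signal),
    read noise, and quantization noise (step^2 / 12) referred to electrons.
    All parameter values are illustrative, not those of a specific camera."""
    step_e = full_well_e / (2 ** adc_bits)  # electrons per ADC count
    variance = signal_e + read_noise_e ** 2 + step_e ** 2 / 12.0
    return math.sqrt(variance)

# At low signal the read noise dominates; at high signal the shot noise does.
low = noise_stddev(signal_e=50, read_noise_e=10)       # ~13.5 e-
high = noise_stddev(signal_e=10000, read_noise_e=10)   # ~100.7 e-
```

    Separating these terms is what lets an image-processing algorithm predict how much of a pixel's fluctuation is sensor noise rather than true image content.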

  14. System design description for the LDUA common video end effector system (CVEE)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The Common Video End Effector System (CVEE), system 62-60, was designed by the Idaho National Engineering Laboratory (INEL) to provide the control interface for the various video end effectors used on the LDUA. The CVEE consists of a Support Chassis, which contains the input and output Opto-22 modules, relays, and power supplies, and a Power Chassis, which contains the bipolar supply and other power supplies. Together, the Support Chassis and the Power Chassis make up the CVEE system. The CVEE is rack-mounted in the At Tank Instrument Enclosure (ATIE). Once connected, it is controlled using the LDUA supervisory data acquisition system (SDAS). Video and control status are displayed on monitors within the LDUA control center.

  15. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  16. Effect Through Broadcasting System Access Point For Video Transmission

    Directory of Open Access Journals (Sweden)

    Leni Marlina

    2015-08-01

    Full Text Available Most universities already operate wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore relevant to study how a broadcasting system transmitting instructional video through an access point performs in a university environment. Wired networks require cables to connect computers and transmit data between them, while wireless networks connect computers via radio waves. This research tests and assesses how a WLAN access point performs when broadcasting instructional video from a server to clients. The study covers how to build a wireless network using an access point, how to set up a server with supporting software for instructional video, and how to establish a system that transmits video from the server to clients through the access point.

  17. FPGA Implementation of Video Transmission System Based on LTE

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2015-01-01

    Full Text Available In order to support high-definition video transmission, a video transmission system based on Long Term Evolution was designed. The system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. Testing shows that the system consumes few hardware resources and is able to transmit bidirectional video clearly and stably.

  18. A content-based news video retrieval system: NVRS

    Science.gov (United States)

    Liu, Huayong; He, Tingting

    2009-10-01

    This paper focuses on TV news programs and presents a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by category, such as politics, finance or entertainment. Combining audiovisual features with caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is efficient.
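
    Keyword-based retrieval over caption text, as described here, is typically backed by an inverted index. The sketch below is a minimal illustration with invented story captions; NVRS's actual segmentation and feature extraction are not reproduced.

```python
from collections import defaultdict

# Hypothetical story ids and caption text, for illustration only.
stories = {
    "story1": "president signs trade agreement",
    "story2": "stock market rallies on trade news",
    "story3": "film festival opens downtown",
}

# Build an inverted index: word -> set of stories containing it.
index = defaultdict(set)
for story_id, caption in stories.items():
    for word in caption.lower().split():
        index[word].add(story_id)

def search(query):
    """Return ids of stories whose captions contain every query keyword."""
    words = query.lower().split()
    if not words:
        return set()
    result = set(index.get(words[0], set()))
    for w in words[1:]:
        result &= index.get(w, set())
    return result

hits = search("trade")  # stories mentioning "trade"
```

    Category-based browsing can then be layered on top by tagging each story id with a category label and filtering the same index.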

  19. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  20. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the turn of the millennium. Video analytics is intended to overcome the inability to exploit video streams in real time for the purpose of detection or anticipation. It involves analyzing videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  1. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    Directory of Open Access Journals (Sweden)

    Riad I. Hammoud

    2014-10-01

    Full Text Available We describe two advanced video analysis techniques: video indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets, affording an improvement in tracking over video data alone and leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  2. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    Science.gov (United States)

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets, affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.
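
    The core association step, labeling a video track with chat call-outs, can be sketched as a simple temporal-overlap match. This is a hedged simplification: VIVA/MINER use probabilistic fusion, and the track windows and chat labels below are invented for illustration.

```python
def associate(chats, tracks):
    """Associate chat call-outs with tracks by time overlap.
    chats: list of (timestamp, label) pairs.
    tracks: dict of track_id -> (t_start, t_end).
    Returns {track_id: [labels]} for chats inside a track's time window."""
    labels = {tid: [] for tid in tracks}
    for t, label in chats:
        for tid, (t0, t1) in tracks.items():
            if t0 <= t <= t1:
                labels[tid].append(label)
    return labels

# Hypothetical track windows (seconds) and analyst call-outs.
tracks = {"T1": (0.0, 10.0), "T2": (12.0, 20.0)}
chats = [(3.5, "white truck"), (15.0, "person dismounts"), (25.0, "unrelated")]
assoc = associate(chats, tracks)
```

    Spatial cues and uncertainty in the chat timestamps would, in a real system, turn this hard time-window test into a probabilistic score over candidate tracks.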

  3. Activity-based exploitation of Full Motion Video (FMV)

    Science.gov (United States)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst's query. Our approach utilizes a novel machine-vision based approach to index FMV, using object recognition and tracking and event and activity detection. This approach enables FMV exploitation in real time, as well as a forensic look-back within archives. It can help extract the most information from video sensor collections, focus the attention of overburdened analysts, form connections between activities over time, and conserve national fiscal resources in exploiting FMV.

  4. Web Based Room Monitoring System Using Webcam

    Directory of Open Access Journals (Sweden)

    Tole Sutikno

    2008-04-01

    Full Text Available Security has become very important along with the increasing number of crime cases. If a security system fails, a mechanism is needed that is capable of recording the criminal act, so that the recording can be used by the authorities for investigation. The objective of this research is to develop a security system using video streaming that is able to monitor in real time, display video in a browser, and record video when triggered by a sensor. The monitoring system comprises two cameras: a security camera that records special events based on an infrared sensor connected to a microcontroller via serial communication, and a camera for real-time room monitoring. The hardware consists of an infrared sensor circuit that detects special events and communicates serially with an AT89S51 microcontroller, which directs the system to perform the recording process; the software consists of a server that displays streaming video in a webpage and a video recorder. The video recording and camera server software use Visual Basic 6.0, and the video streaming uses PHP 5.1.6. As a result, the system can record the special events of interest and can display streaming video in a webpage over a LAN infrastructure.
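
    The trigger logic described above (an infrared sensor event switching the recorder on) can be sketched as a small state machine. Serial-port and camera I/O are omitted, and the tick-based simulation below is an assumption for illustration, not the paper's firmware.

```python
def run(events, record_frames=3):
    """Simulate sensor-triggered recording.
    events: iterable of booleans, True when the IR sensor fires that tick.
    Each trigger keeps the recorder active for `record_frames` ticks.
    Returns the list of ticks during which the recorder was active."""
    active_until = -1  # last tick (inclusive) the recorder stays on
    recorded = []
    for tick, fired in enumerate(events):
        if fired:
            active_until = tick + record_frames - 1  # (re)arm the recorder
        if tick <= active_until:
            recorded.append(tick)
    return recorded

# Two sensor events; each keeps the recorder on for three ticks.
ticks = run([False, True, False, False, False, True, False])
```

    In the actual system this loop would run on the PC side, polling the AT89S51 over the serial line and starting or stopping the webcam capture accordingly.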

  5. Embedded sensor systems

    CERN Document Server

    Agrawal, Dharma Prakash

    2017-01-01

    This inspiring textbook provides an introduction to wireless technologies for sensors, explores the potential use of sensors for numerous applications, and utilizes probability theory and mathematical methods as a means of embedding sensors in system design. It discusses the need for synchronization and its underlying limitations, the inter-relation between coverage, connectivity and the number of sensors needed, and the use of geometrical distance to determine the location of the base station for data collection, and explores the use of anchor nodes for relative position determination of sensors. The book covers energy conservation, communication using TCP, the need for clustering and data aggregation, and residual energy determination and energy harvesting. It covers key topics of sensor communication like mobile base stations and relay nodes, delay-tolerant sensor networks, and remote sensing and possible applications. The book defines routing methods and evaluates performance for random and regular sensor topology an...

  6. Smart sensors and systems

    CERN Document Server

    Kyung, Chong-Min; Yasuura, Hiroto; Liu, Yongpan

    2015-01-01

    This book describes technology used for effective sensing of our physical world and intelligent processing techniques for sensed information, which are essential to the success of the Internet of Things (IoT). The authors provide a multidisciplinary view of sensor technology from the MEMS, biological, chemical, and electrical domains and showcase smart sensor systems in real applications including smart home, transportation, medical, environmental, and agricultural settings. Unlike earlier books on sensors, this book provides a "global" view of smart sensors, covering abstraction levels from device and circuit to systems and algorithms.

  7. Video copy protection and detection framework (VPD) for e-learning systems

    Science.gov (United States)

    ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.

    2013-03-01

    This article reviews and compares copyright issues related to digital video files, for which copy detection can be categorized as content-based or digital-watermarking-based. We then describe how to protect a digital video using a particular video data hiding method and algorithm. We also discuss how to detect the copyright status of a file and, based on an analysis of the state of video copy detection technology combined with our own research results, put forward a new video protection and copy detection approach for plagiarism and e-learning systems using video data hiding. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).

  8. The modular integrated video system (MIVS)

    International Nuclear Information System (INIS)

    Schneider, S.L.; Sonnier, C.S.

    1987-01-01

    The Modular Integrated Video System (MIVS) is being developed for the International Atomic Energy Agency (IAEA) for use in facilities where mains power is available and the separation of the Camera and Recording Control Unit is desirable. The system is being developed under the US Program for Technical Assistance to the IAEA Safeguards (POTAS). The MIVS is designed to be a user-friendly system, allowing operation with minimal effort and training. The system software, through the use of a Liquid Crystal Display (LCD) and four soft keys, leads the inspector through the setup procedures to accomplish the intended surveillance or maintenance task. Review of surveillance data is accomplished with the use of a Portable Review Station. This Review Station will aid the inspector in the review process and determine the number of missed video scenes during a surveillance period

  9. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  10. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Science.gov (United States)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
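    The chroma-key step described in the two records above can be sketched minimally: classify each pixel as foreground when it lies far from the key (background) color. The key color and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def chroma_key_mask(frame, key_color=(0, 255, 0), threshold=100.0):
    """Foreground mask: True where a pixel's RGB distance to the key
    (background) color exceeds the threshold."""
    diff = frame.astype(np.float64) - np.asarray(key_color, dtype=np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel Euclidean distance
    return dist > threshold

# Toy 2x2 RGB frame: two near-key (background) pixels, two foreground pixels.
frame = np.array([[[0, 255, 0], [200, 30, 40]],
                  [[10, 250, 5], [255, 255, 255]]], dtype=np.uint8)
mask = chroma_key_mask(frame)
```

    In practice the mask would then drive object extraction and compositing into the virtual 3D environment.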

  11. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory

    Science.gov (United States)

    Brewster, L.; Johnston, A.; Howard, R.; Mitchell, J.; Cryan, S.

    2007-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk through the testing and analysis of selected relative navigation sensor technologies using hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL).

  12. Adaptive Motor Resistance Video Game Exercise Apparatus and Method of Use Thereof

    Science.gov (United States)

    Reich, Alton (Inventor); Shaw, James (Inventor)

    2015-01-01

    The invention comprises a method and/or an apparatus using computer-configured exercise equipment and electric-motor-provided physical resistance in conjunction with a game system, such as a video game system, where the exercise system provides real physical resistance to a user interface. Results of user interaction with the user interface are integrated into a video game, such as one running on a game console. The resistance system comprises: a subject interface, software control, a controller, an electric servo assist/resist motor, an actuator, and/or a subject sensor. The system provides actual physical interaction with a resistance device as input to the game console and the game run thereon.

  13. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Lab

    Science.gov (United States)

    Brewster, Linda L.; Howard, Richard T.; Johnston, A. S.; Carrington, Connie; Mitchell, Jennifer D.; Cryan, Scott P.

    2008-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk through the testing and analysis of selected relative navigation sensor technologies using hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL).

  14. Wearable Sensor Systems for Infants

    Directory of Open Access Journals (Sweden)

    Zhihua Zhu

    2015-02-01

    Full Text Available Continuous health status monitoring of infants is achieved with the development and fusion of wearable sensing technologies, wireless communication techniques and a low energy-consumption microprocessor with high performance data processing algorithms. As a clinical tool applied in the constant monitoring of physiological parameters of infants, wearable sensor systems for infants are able to transmit the information obtained inside an infant’s body to clinicians or parents. Moreover, such systems with integrated sensors can perceive external threats such as falling or drowning and warn parents immediately. Firstly, the paper reviews some available wearable sensor systems for infants; secondly, we introduce the different modules of the framework in the sensor systems; lastly, the methods and techniques applied in the wearable sensor systems are summarized and discussed. The latest research and achievements have been highlighted in this paper and the meaningful applications in healthcare and behavior analysis are also presented. Moreover, we give a lucid perspective of the development of wearable sensor systems for infants in the future.

  15. Water-Cut Sensor System

    KAUST Repository

    Karimi, Muhammad Akram; Shamim, Atif; Arsalan, Muhammad

    2018-01-01

    Provided in some embodiments is a method of manufacturing a pipe-conformable water-cut sensor system. Provided in some embodiments is a method for manufacturing a water-cut sensor system that includes providing a helical T-resonator, a helical ground

  16. High speed video recording system on a chip for detonation jet engine testing

    Directory of Open Access Journals (Sweden)

    Samsonov Alexander N.

    2018-01-01

    Full Text Available This article describes the development of a system on a chip for high-speed video recording. The research was motivated by the difficulty of selecting FPGAs and CPUs that combine wide bandwidth, high speed, and a large number of multipliers for real-time signal analysis. The current trend toward high-density silicon integration will soon result in hybrid sensor-controller-memory circuits packed in a single chip, and this research is the first step in a series of experiments on manufacturing such hybrid devices. The present task is high-level synthesis of high-speed logic and a CPU core in an FPGA. The work resulted in the implementation and examination of an FPGA-based prototype.

  17. NSTX High Temperature Sensor Systems

    International Nuclear Information System (INIS)

    McCormack, B.; Kugel, H.W.; Goranson, P.; Kaita, R.

    1999-01-01

    The design of the more than 300 in-vessel sensor systems for the National Spherical Torus Experiment (NSTX) has encountered several challenging fusion reactor diagnostic issues involving high temperatures and space constraints. This has resulted in unique miniature, high temperature in-vessel sensor systems mounted in small spaces behind plasma-facing armor tiles, and they are prototypical of possible high power reactor first-wall applications. In the Center Stack, Divertor, Passive Plate, and vessel wall regions, the small magnetic sensors, large magnetic sensors, flux loops, Rogowski coils, thermocouples, and Langmuir probes are qualified for 600 degrees C operation. This rating will accommodate both peak rear-face graphite tile temperatures during operations and the 350 degrees C bake-out conditions. Similar sensor systems, including flux loops, on other vacuum vessel regions are qualified for 350 degrees C operation. Cabling from the sensors embedded in the graphite tiles follows narrow routes to exit the vessel. The detailed sensor design and installation methods of these diagnostic systems developed for high-powered ST operation are discussed.

  18. Wireless network system based multi-non-invasive sensors for smart home

    Science.gov (United States)

    Issa Ahmed, Rudhwan

    There are several techniques that have been implemented for smart homes usage; however, most of these techniques are limited to a few sensors. Many of these methods neither meet the needs of the user nor are cost-effective. This thesis discusses the design, development, and implementation of a wireless network system, based on multi-non-invasive sensors for smart home environments. This system has the potential to be used as a means to accurately, and remotely, determine the activities of daily living by continuously monitoring relatively simple parameters that measure the interaction between users and their surrounding environment. We designed and developed a prototype system to meet the specific needs of the elderly population. Unlike audio-video based health monitoring systems (which have associated problems such as the encroachment of privacy), the developed system's distinct features ensure privacy and are almost invisible to the occupants, thus increasing the acceptance levels of this system in household environments. The developed system not only achieved high levels of accuracy, but it is also portable, easy to use, cost-effective, and requires low data rates and less power compared to other wireless devices such as Wi-Fi, Bluetooth, wireless USB, Ultra wideband (UWB), or Infrared (IR) wireless. Field testing of the prototype system was conducted at different locations inside and outside of the Minto Building (Centre for Advanced Studies in Engineering at Carleton University) as well as other locations, such as the washroom, kitchen, and living room of a prototype apartment. The main goal of the testing was to determine the range of the prototype system and the functionality of each sensor in different environments. After it was verified that the system operated well in all of the tested environments, data were then collected at the different locations for analysis and interpretation in order to identify the activities of daily living of an occupant.

  19. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    Full Text Available A novel video conference system is developed. Supposing that three people A, B, and C attend the video conference, the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact seems to be kept between B and C). In the case of a triangle video conference, each video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. The cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to lie in the same direction, eye contact is kept and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point toward the other participant. When the three participants sit at the vertices of an equilateral triangle, eye contact can be kept even in the situation mentioned above (eye contact between B and C from the viewpoint of A). Eye contact can be kept not only for two or three participants but for any number, as long as they sit at the vertices of a regular polygon.

  20. Bioinspired Sensor Systems

    Directory of Open Access Journals (Sweden)

    Manel del Valle

    2011-10-01

    Full Text Available This editorial summarizes and classifies the contributions presented by different authors to the special issue of the journal Sensors dedicated to Bioinspired Sensor Systems. From the coupling of sensor arrays or networks, plus computer processing abilities, new applications to mimic or to complement human senses are arising in the context of ambient intelligence. Principles used, and illustrative study cases have been presented permitting readers to grasp the current status of the field.

  1. Specialized video systems for use in underground storage tanks

    International Nuclear Information System (INIS)

    Heckendorn, F.M.; Robinson, C.W.; Anderson, E.K.; Pardini, A.F.

    1994-01-01

    The Robotics Development Groups at the Savannah River Site and the Hanford Site have developed remote video and photography systems for deployment in underground radioactive waste storage tanks at Department of Energy (DOE) sites as a part of the Office of Technology Development (OTD) program within DOE. Figure 1 shows the remote video/photography systems in a typical underground storage tank environment. Viewing and documenting the tank interiors and their associated annular spaces is an extremely valuable tool in characterizing their condition and contents and in controlling their remediation. Several specialized video/photography systems and robotic end effectors have been fabricated that provide remote viewing and lighting. All are remotely deployable into and from the tank, and all viewing functions are remotely operated. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. Overview video systems, both monaural and stereo versions, include a camera, zoom lens, camera positioner, vertical deployment system, and positional feedback. Each independent video package can be inserted through a 100 mm (4 in.) diameter opening. A special attribute of these packages is their design to never become larger than the entry hole during operation and to be fully retrievable. The end effector systems will be deployed on the large robotic Light Duty Utility Arm (LDUA) being developed by other portions of the OTD-DOE programs. The systems implement a multi-functional "over the coax" design that uses a single coaxial cable for all data and control signals over the more than 900-foot cable (or fiber optic) link.

  2. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.

  3. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees.

    Science.gov (United States)

    Giraldo, Paula Jimena Ramos; Aguirre, Álvaro Guerrero; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio

    2017-04-06

    Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.
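    The abstracts above do not specify the sharpness factor used to measure blurriness; a common proxy is the variance of a Laplacian response, sketched here with numpy and an illustrative box blur to show the score dropping for blurred frames (the specific filter and test image are assumptions, not the paper's method):

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of a 4-neighbour Laplacian response;
    higher means sharper (less blurred)."""
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def box_blur3(gray):
    """3x3 box blur, used only to show that blurring lowers the score."""
    g = gray.astype(np.float64)
    h, w = g.shape
    return sum(g[i:h - 2 + i, j:w - 2 + j]
               for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 255.0, size=(64, 64))   # high-detail test image
score_sharp = laplacian_variance(sharp)
score_blurred = laplacian_variance(box_blur3(sharp))
```

    Frames whose score falls below a threshold would be discarded before further processing.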

  4. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is applied to blend the images. Simulation results demonstrate the efficiency of our method.
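    Once matched key-point pairs are available, the homography between overlapping views can be estimated by the direct linear transform (DLT); a minimal numpy sketch follows (the SURF matching and blending steps are omitted, and the sample points are synthetic):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H @ src in
    homogeneous coordinates, from >= 4 correspondences as (N, 2) arrays."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H and dehomogenize."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Recover a known homography from 4 exact correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [0.001, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

    With the homography in hand, pixels of one camera's frame can be warped into the master camera's coordinates to find the overlap region to blend.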

  5. Video Content Search System for Better Students Engagement in the Learning Process

    Directory of Open Access Journals (Sweden)

    Alanoud Alotaibi

    2014-12-01

    Full Text Available As a component of the e-learning educational process, content plays an essential role. The video-recorded lectures in e-learning systems are becoming increasingly important to learners. In most cases, a single video-recorded lecture contains more than one topic or sub-topic. Therefore, to enable learners to find the desired topic and reduce learning time, e-learning systems need to provide a capability for searching within video content, by enabling learners to identify the video, or the portion of a video, that contains a keyword they are looking for. This research aims to develop a Video Content Search (VCS) system to facilitate searching in educational videos and their contents. A preliminary experiment was conducted on a selected university course; all students needed a system that avoids the time-wasting problem of watching long videos with no significant benefit. The statistics showed that the number of learners increased during the experiment. Future work will include studying the impact of the VCS system on students' performance and satisfaction.

  6. Hybrid compression of video with graphics in DTV communication systems

    OpenAIRE

    Schaar, van der, M.; With, de, P.H.N.

    2000-01-01

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an efficient transmission/storage of these mixed video and graphics signals and, at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video...

  7. Facial expression system on video using widrow hoff

    Science.gov (United States)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area that connects human feelings to computer applications, such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method to train and test images with an Adaptive Linear Neuron (ADALINE) approach. The system performance is evaluated by two parameters: detection rate and false positive rate. The system's accuracy depends on good technique and on the face positions used in training and testing.
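    The Widrow-Hoff (LMS) rule that trains the ADALINE unit can be sketched on toy data; the features and labels below are illustrative stand-ins, not the paper's facial features:

```python
import numpy as np

def train_adaline(X, y, lr=0.01, epochs=500):
    """Train a single ADALINE unit with the Widrow-Hoff (LMS) rule:
    w <- w + lr * (target - w.x) * x, sample by sample."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - xi @ w          # linear-unit error before thresholding
            w = w + lr * err * xi      # Widrow-Hoff / LMS update
    return w

def predict(w, X):
    """Threshold the linear output to obtain a binary decision."""
    return (X @ w >= 0.0).astype(int)

# Toy linearly separable "features" (first column is a bias term).
X = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])   # label depends on the second feature
w = train_adaline(X, y)
labels = predict(w, X)
```

    Unlike the perceptron rule, the Widrow-Hoff update uses the raw linear output, so the weights converge toward the least-squares solution even when samples are already classified correctly.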

  8. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    He Dajun

    2004-01-01

    Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and some common object-based video processing operations such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
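    The energy-relationship idea can be illustrated on a 1-D signal: hide one bit by forcing an energy ordering between two groups of DFT coefficients. The group indices and the swap-based enforcement below are assumptions for illustration; the paper's actual scheme operates on selected coefficients of video frames:

```python
import numpy as np

def embed_bit(signal, bit, idx_a=(3, 4), idx_b=(5, 6)):
    """Hide one bit by enforcing an energy ordering between two groups of
    DFT coefficients: group A more energetic for bit 1, group B for bit 0."""
    F = np.fft.fft(np.asarray(signal, dtype=np.float64))
    ea = sum(abs(F[i]) ** 2 for i in idx_a)
    eb = sum(abs(F[i]) ** 2 for i in idx_b)
    if (ea > eb) != bool(bit):           # ordering wrong: swap the groups
        for i, j in zip(idx_a, idx_b):
            F[i], F[j] = F[j], F[i]
            F[-i], F[-j] = F[-j], F[-i]  # keep conjugate symmetry (real output)
    return np.fft.ifft(F).real

def extract_bit(signal, idx_a=(3, 4), idx_b=(5, 6)):
    """Recover the bit from the energy ordering of the two groups."""
    F = np.fft.fft(np.asarray(signal, dtype=np.float64))
    ea = sum(abs(F[i]) ** 2 for i in idx_a)
    eb = sum(abs(F[i]) ** 2 for i in idx_b)
    return int(ea > eb)

rng = np.random.default_rng(1)
block = rng.normal(size=64)              # stand-in for one row of a frame
recovered = [extract_bit(embed_bit(block, b)) for b in (0, 1)]
```

    An ordering-based mark of this kind survives moderate distortion because compression tends to scale coefficient energies rather than reorder them.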

  9. A Retrieval Optimized Surveillance Video Storage System for Campus Application Scenarios

    Directory of Open Access Journals (Sweden)

    Shengcheng Ma

    2018-01-01

    Full Text Available This paper investigates and analyzes the characteristics of video data and puts forward a campus surveillance video storage system, with a university campus as the specific application environment. To address the challenge that content-based video retrieval response times are too long, a key-frame index subsystem is designed. The key frames of a video reflect its main content; extracted from the video, key frames are associated with the metadata information to establish the storage index, and the key-frame index is then used in lookup operations while querying. This method greatly reduces the amount of video data read and effectively improves query efficiency. Building on the above, we model the storage system with a stochastic Petri net (SPN) and verify the improvement in query performance by quantitative analysis.
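    The core idea of a key-frame index, metadata associated with key frames so that queries avoid scanning the video data itself, can be sketched as a simple inverted index (the tag-based metadata and API names below are hypothetical illustrations, not the paper's design):

```python
from collections import defaultdict

class KeyFrameIndex:
    """Minimal sketch of a key-frame index: each key frame is stored with
    metadata tags, and queries return (video, timestamp) hits without
    reading any video data."""

    def __init__(self):
        self._index = defaultdict(list)  # tag -> [(video_id, seconds), ...]

    def add_key_frame(self, video_id, seconds, tags):
        for tag in tags:
            self._index[tag.lower()].append((video_id, seconds))

    def query(self, tag):
        return sorted(self._index.get(tag.lower(), []))

idx = KeyFrameIndex()
idx.add_key_frame("cam_gate_01", 12.0, ["pedestrian", "bicycle"])
idx.add_key_frame("cam_gate_01", 95.5, ["car"])
idx.add_key_frame("cam_lab_02", 3.0, ["pedestrian"])
hits = idx.query("pedestrian")
```

    Only the videos and offsets returned by the index are then fetched from bulk storage, which is what cuts retrieval response time.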

  10. Video-based measurements for wireless capsule endoscope tracking

    International Nuclear Information System (INIS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.

  11. Video-based measurements for wireless capsule endoscope tracking

    Science.gov (United States)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.
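    The registration step in the two records above, estimating displacement and rotation between consecutive frames from matched interest points under a RANSAC consensus, can be sketched with a 2-D rigid transform on synthetic points (the feature extraction is omitted; tolerances, iteration counts, and the outlier model are illustrative assumptions):

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares 2D rotation + translation (Kabsch) mapping src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, tol=0.5, seed=0):
    """Robust estimate: fit on minimal 2-point samples, keep the largest
    consensus set, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        pick = rng.choice(len(src), size=2, replace=False)
        R, t = fit_rigid(src[pick], dst[pick])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_rigid(src[best], dst[best])

# Synthetic interest points: rotate 10 degrees, translate, corrupt 4 matches.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]])
t_true = np.array([2.0, -1.0])
rng = np.random.default_rng(42)
src = rng.uniform(-50.0, 50.0, size=(20, 2))
dst = src @ R_true.T + t_true
dst[:4] += rng.uniform(20.0, 40.0, size=(4, 2))   # simulated mismatches
R_est, t_est = ransac_rigid(src, dst)
angle_est = np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0]))
```

    Accumulating the per-frame translations along the sequence is what would yield the capsule's travel distance estimate.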

  12. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically-attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as the digital image correlation (DIC) and the point-tracking. However, they typically require speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. 
This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little
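    The output-only (operational) modal analysis stage can be illustrated independently of the video pipeline: given displacement time series at several measurement points (which a camera-based method would extract per pixel), the modal frequency appears as a peak of the averaged power spectrum and the mode shape as the dominant singular vector at that peak. The simulated single-mode data below is an assumption for illustration only:

```python
import numpy as np

# Simulate displacement time series at 8 measurement points: a single
# 6 Hz mode with shape phi, plus noise (a stand-in for per-pixel motions
# extracted from video).
fs, n = 100.0, 1024
t = np.arange(n) / fs
phi = np.sin(np.pi * (np.arange(8) + 1) / 9.0)      # assumed mode shape
rng = np.random.default_rng(0)
data = np.outer(phi, np.sin(2 * np.pi * 6.0 * t)) + 0.05 * rng.normal(size=(8, n))

# Output-only identification: the peak of the averaged spectrum gives the
# modal frequency; the dominant singular vector there gives the shape.
spec = np.fft.rfft(data, axis=1)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
k = np.argmax((np.abs(spec) ** 2).mean(axis=0)[1:]) + 1   # skip DC bin
f_id = freqs[k]
U, _, _ = np.linalg.svd(spec[:, [k]], full_matrices=False)
shape_id = np.abs(U[:, 0])
shape_id /= shape_id.max()
```

    This is the frequency-domain-decomposition idea in miniature; the camera's contribution is that "sensors" exist at every pixel, giving full-field mode shapes without attached instrumentation.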

  13. ARTIST: advanced radiation-tolerant information and sensor system for teleoperation

    International Nuclear Information System (INIS)

    Schmidt, D.; Pathe, V.; Ostertag, M.; Dittrich, F.; Dumbreck, A.; Sirat, G.; Katzouraki, M.

    1993-01-01

    ARTIST integrates a stereoscopic camera and a rangefinder as a sensor package into a high-precision pan-and-tilt head and represents the recorded data in a clear and comprehensive way for telemanipulation and control tasks as well as for remote driving. The sensors as well as the pan-and-tilt head are radiation-tolerant, so they can be used in nuclear environments. The pan-and-tilt head and work station are completely configured and developed with the emphasis on multisensor integration, real-time video processing and graphical position representation. An efficient man-machine interface with appropriate software is included. (author)

  14. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  15. Distributed sensor coordination for advanced energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Tumer, Kagan [Oregon State Univ., Corvallis, OR (United States). School of Mechanical, Industrial and Manufacturing Engineering

    2015-03-12

    Motivation: The ability to collect key system level information is critical to the safe, efficient and reliable operation of advanced power systems. Recent advances in sensor technology have enabled some level of decision making directly at the sensor level. However, coordinating large numbers of sensors, particularly heterogeneous sensors, to achieve system level objectives such as predicting plant efficiency, reducing downtime or predicting outages requires sophisticated coordination algorithms. Indeed, a critical issue in such systems is how to ensure that the interactions of a large number of heterogeneous system components do not interfere with one another or lead to undesirable behavior. Objectives and Contributions: The long-term objective of this work is to provide sensor deployment, coordination and networking algorithms for large numbers of sensors to ensure the safe, reliable, and robust operation of advanced energy systems. Our two specific objectives are to: 1. Derive sensor performance metrics for heterogeneous sensor networks. 2. Demonstrate effectiveness, scalability and reconfigurability of heterogeneous sensor networks in advanced power systems. The key technical contribution of this work is to push the coordination step into the design of the objective functions of the sensors, allowing networks of heterogeneous sensors to be controlled. By ensuring that the control and coordination are not specific to particular sensor hardware, this approach enables the design and operation of large heterogeneous sensor networks. In addition to the coordination mechanism, this approach allows the system to be reconfigured in response to changing needs (e.g., sudden external events requiring new responses) or changing sensor network characteristics (e.g., sudden changes to plant condition).
Impact: The impact of this work extends to a large class of problems relevant to the National Energy Technology Laboratory including sensor placement, heterogeneous sensor
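The key contribution above, coordination embedded in the sensors' objective functions, can be illustrated with a difference reward: each sensor is rewarded with its marginal contribution to a global objective, G(z) − G(z without that sensor). The global objective below is invented for illustration and is not from the report:

```python
def global_objective(readings):
    """Hypothetical system-level objective: coverage credit with diminishing
    returns for redundant sensors observing the same zone."""
    zones = {}
    for zone in readings:
        zones[zone] = zones.get(zone, 0) + 1
    # Each zone contributes 1 for the first observer, 0.1 per extra observer.
    return sum(1 + 0.1 * (n - 1) for n in zones.values())

def difference_reward(readings, i):
    """D_i = G(z) - G(z with sensor i removed): sensor i's marginal value."""
    without_i = readings[:i] + readings[i + 1:]
    return global_objective(readings) - global_objective(without_i)

# Three sensors: two redundantly watch zone 'A', one watches zone 'B'.
readings = ['A', 'A', 'B']
print(difference_reward(readings, 0))  # redundant sensor: small reward
print(difference_reward(readings, 2))  # unique sensor: full reward
```

A sensor maximizing its own difference reward is thereby pushed toward configurations that raise the system-level objective, which is what makes hardware-agnostic coordination of heterogeneous networks possible.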

  16. Real-time geo-referenced video mosaicking with the MATISSE system

    DEFF Research Database (Denmark)

    Vincent, Anne-Gaelle; Pessel, Nathalie; Borgetto, Manon

    This paper presents the MATISSE system: Mosaicking Advanced Technologies Integrated in a Single Software Environment. This system aims at producing in-line and off-line geo-referenced video mosaics of the seabed given a video input and navigation data. It is based upon several techniques of image...

  17. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    Science.gov (United States)

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
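The intensity-ratio idea can be illustrated with two-color ratio pyrometry under the Wien approximation: the ratio of two color-channel intensities determines temperature independently of emissivity and optics. The effective channel wavelengths below are assumed for illustration, not taken from the patent:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m·K

def wien_intensity(lam, temp):
    """Wien-approximation spectral intensity (arbitrary scale)."""
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(i_red, i_green, lam_red=620e-9, lam_green=540e-9):
    """Invert the red/green intensity ratio for temperature (K)."""
    ln_ratio = math.log(i_red / i_green)
    return (C2 * (1 / lam_green - 1 / lam_red)
            / (ln_ratio - 5 * math.log(lam_green / lam_red)))

t_true = 1800.0  # an illustrative refractory-wall temperature, K
i_r = wien_intensity(620e-9, t_true)
i_g = wien_intensity(540e-9, t_true)
print(round(ratio_temperature(i_r, i_g), 1))  # → 1800.0
```

Applying this per pixel over the RGB (or GMCY) channels is what turns an ordinary color sensor into a thermal imager.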

  18. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor that degrades the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.
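A toy version of depth-from-calibration occlusion reasoning, assuming a simple pinhole camera looking over a flat ground plane (the focal length, camera height and horizon row are made-up calibration values, not the paper's):

```python
def footpoint_depth(v_foot, f=800.0, cam_height=6.0, v_horizon=240.0):
    """Depth (m) of an object standing on the ground plane, from the image
    row of its lowest point, for a calibrated forward-looking pinhole camera."""
    if v_foot <= v_horizon:
        raise ValueError("footpoint must lie below the horizon")
    return f * cam_height / (v_foot - v_horizon)

def occludes(box_a, box_b):
    """box = (u1, v1, u2, v2); A occludes B if the boxes overlap in the image
    and A's footpoint depth is smaller (A is nearer the camera)."""
    overlap = not (box_a[2] < box_b[0] or box_b[2] < box_a[0] or
                   box_a[3] < box_b[1] or box_b[3] < box_a[1])
    return overlap and footpoint_depth(box_a[3]) < footpoint_depth(box_b[3])

near = (100, 300, 200, 480)   # footpoint row 480 → close to camera
far  = (150, 280, 230, 320)   # footpoint row 320 → farther away
print(footpoint_depth(480))   # → 20.0
print(occludes(near, far))    # → True
```

The depth ordering of overlapping boxes is exactly the signal a tracker needs to distinguish "target occluded" from "target lost".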

  19. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming.

    Science.gov (United States)

    Rosenberg, Michael; Thornton, Ashleigh L; Lay, Brendan S; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS) during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results.
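The inter-rater reliability figures reported here are Pearson correlations over per-child movement counts. A self-contained sketch, with hypothetical counts (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two raters' per-child movement counts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical jump counts for six children from two human raters.
rater1 = [12, 15, 9, 20, 14, 11]
rater2 = [11, 15, 10, 19, 15, 11]
print(round(pearson_r(rater1, rater2), 3))
```

Values above roughly 0.75 are conventionally read as excellent agreement, which is how the jump results in this study are interpreted.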

  20. Parameter and state estimation using audio and video signals

    OpenAIRE

    Evestedt, Magnus

    2005-01-01

    The complexity of industrial systems, and of the mathematical models used to describe them, is increasing. In many cases point sensors are no longer sufficient to provide controllers and monitoring instruments with the information necessary for operation. The need for other types of information, such as audio and video, has grown. Suitable applications range in a broad spectrum from microelectromechanical systems and bio-medical engineering to papermaking and steel production. This thesis is divided into f...

  1. Visualization of heavy ion-induced charge production in a CMOS image sensor

    CERN Document Server

    Végh, J; Klamra, W; Molnár, J; Norlin, LO; Novák, D; Sánchez-Crespo, A; Van der Marel, J; Fenyvesi, A; Valastyan, I; Sipos, A

    2004-01-01

    A commercial CMOS image sensor was irradiated with heavy ion beams in the several MeV energy range. The image sensor is equipped with a standard video output. The data were collected on-line through frame grabbing and analysed off-line after digitisation. It was shown that the response of the image sensor to the heavy ion bombardment varied with the type and energy of the projectiles. The sensor will be used for the CMS Barrel Muon Alignment system.

  2. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    International Nuclear Information System (INIS)

    Pan, Guobing; Chen, Jiaoliao; Xin, Wenhui; Yan, Guozheng

    2011-01-01

    Wireless capsule endoscopy (WCE), as a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, the existing WCE systems are not widely applied clinically because of their low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. This WCE system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE can image the GI tract and transmit the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and experiments were performed to test the capability of energy transfer. The results showed that the wireless electric power supply system could transfer more than 136 mW, which is enough for the operation of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig with a resolution of 320 × 240, and transmitted NTSC-format video outside the body. Because of the wireless power supply, a video WCE system with a high frame rate and high resolution becomes feasible, providing a novel solution for the diagnosis of the GI tract in the clinic
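A rough version of the power-link arithmetic such a mathematical prototype involves, for an idealized resonant inductive link (all component values are illustrative assumptions, not the paper's):

```python
import math

def delivered_power(f, mutual, i1, r_coil, r_load):
    """Power (W) delivered to the load of a resonant, inductively coupled
    secondary: induced EMF = ω·M·I1, with the tuning capacitor cancelling
    the coil reactance, leaving only r_coil + r_load in the loop."""
    emf = 2 * math.pi * f * mutual * i1
    i2 = emf / (r_coil + r_load)
    return i2 ** 2 * r_load

# Illustrative numbers: 1 MHz link, 0.5 µH mutual inductance,
# 0.5 A primary current, 2 Ω secondary coil loss, 10 Ω equivalent load.
p = delivered_power(1e6, 0.5e-6, 0.5, 2.0, 10.0)
print(round(p * 1000, 1), "mW")  # → 171.3 mW
```

Comfortably clearing the capsule's power budget (here, the reported 136 mW) is what lets the designers afford a 30 f/s video transmitter.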

  3. Guide to Synchronization of Video Systems to IRIG Timing

    Science.gov (United States)

    1992-07-01

    Optical Systems Group, Range Commanders Council, White Sands Missile Range, NM 88002-5110, RCC Document 456-92. This document addresses the broad field of synchronization of video systems to IRIG timing, with emphasis on color synchronization. Before delving into the details of synchronization, it reviews the reasons for synchronizing video systems.

  4. Adaptive intrusion data system (AIDS) software routines

    International Nuclear Information System (INIS)

    Corlis, N.E.

    1980-07-01

    An Adaptive Intrusion Data System (AIDS) was developed to collect information from intrusion alarm sensors as part of an evaluation system to improve sensor performance. AIDS is a unique digital data-compression, storage, and formatting system; it also incorporates a capability for video selection and recording for assessment of the sensors monitored by the system. The system is software reprogrammable to numerous configurations that may be used for the collection of environmental, bilevel, analog, and video data. This report describes the software routines that control the different AIDS data-collection modes, the diagnostic programs to test the operating hardware, and the data format. Sample data printouts are also included

  5. Distributed video data fusion and mining

    Science.gov (United States)

    Chang, Edward Y.; Wang, Yuan-Fang; Rodoplu, Volkan

    2004-09-01

    This paper presents an event sensing paradigm for intelligent event-analysis in a wireless, ad hoc, multi-camera, video surveillance system. In particular, we present statistical methods that we have developed to support three aspects of event sensing: 1) energy-efficient, resource-conserving, and robust sensor data fusion and analysis, 2) intelligent event modeling and recognition, and 3) rapid deployment, dynamic configuration, and continuous operation of the camera networks. We outline our preliminary results, and discuss future directions that research might take.

  6. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, shifting processing steps conventionally performed at the video encoder side to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  7. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming

    Science.gov (United States)

    Rosenberg, Michael; Lay, Brendan S.; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS), during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidestep during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results. PMID:27442437

  8. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming.

    Directory of Open Access Journals (Sweden)

    Michael Rosenberg

    Full Text Available While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS), during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidestep during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results.

  9. High pressure fiber optic sensor system

    Science.gov (United States)

    Guida, Renato; Xia, Hua; Lee, Boon K; Dekate, Sachin N

    2013-11-26

    The present application provides a fiber optic sensor system. The fiber optic sensor system may include a small diameter bellows, a large diameter bellows, and a fiber optic pressure sensor attached to the small diameter bellows. Contraction of the large diameter bellows under an applied pressure may cause the small diameter bellows to expand such that the fiber optic pressure sensor may measure the applied pressure.
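The two-bellows arrangement can be sketched with a simple area-ratio force balance: the force collected by the large bellows acts on the small bellows' much smaller area, amplifying the pressure seen by the fiber optic sensor. This is an idealization that ignores bellows stiffness and friction, with invented dimensions:

```python
def amplified_pressure(p_applied, d_large, d_small):
    """Idealized pressure amplification of a two-bellows coupler: the force
    collected by the large bellows (p·A_large) acts on the small bellows'
    area, so amplification = (d_large / d_small)**2. Ignores bellows
    stiffness and friction (simplifying assumptions)."""
    area_ratio = (d_large / d_small) ** 2
    return p_applied * area_ratio

# A 40 mm bellows driving a 10 mm bellows amplifies pressure 16x, letting a
# modest-range fiber optic sensor resolve small applied pressures.
print(amplified_pressure(0.5, 40e-3, 10e-3))  # → 8.0 (units follow the input)
```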

  10. Integrated multisensor perimeter detection systems

    Science.gov (United States)

    Kent, P. J.; Fretwell, P.; Barrett, D. J.; Faulkner, D. A.

    2007-10-01

    The report describes the results of a multi-year programme of research aimed at the development of an integrated multi-sensor perimeter detection system capable of being deployed at an operational site. The research was driven by end user requirements in protective security, particularly in threat detection and assessment, where effective capability was either not available or prohibitively expensive. Novel video analytics have been designed to provide robust detection of pedestrians in clutter while new radar detection and tracking algorithms provide wide area day/night surveillance. A modular integrated architecture based on commercially available components has been developed. A graphical user interface allows intuitive interaction and visualisation with the sensors. The fusion of video, radar and other sensor data provides the basis of a threat detection capability for real life conditions. The system was designed to be modular and extendable in order to accommodate future and legacy surveillance sensors. The current sensor mix includes stereoscopic video cameras, mmWave ground movement radar, CCTV and a commercially available perimeter detection cable. The paper outlines the development of the system and describes the lessons learnt after deployment in a pilot trial.

  11. An automated data exploitation system for airborne sensors

    Science.gov (United States)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighter from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicles, dismounts, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutters. Furthermore, our example shows that ADES as a baseline platform can provide capability for vehicle abnormal behavior detection to help imagery analysts quickly trace down potential threats and crimes.
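Step (2) of the pipeline, non-linear morphological moving-target detection, can be caricatured as frame differencing followed by a binary opening (erosion, then dilation) to suppress single-pixel clutter. This is a generic scheme sketched for illustration, not the authors' algorithm:

```python
def diff_mask(prev, curr, thresh=20):
    """Binary change mask from two grayscale frames (lists of lists)."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def erode(mask):
    """3x3 erosion: keep a pixel only if its full neighborhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 dilation: set a pixel if any neighbor is set (borders clamped)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(mask[min(max(y + dy, 0), h - 1)]
                                    [min(max(x + dx, 0), w - 1)]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

# A 3x3 moving blob survives the opening; isolated single-pixel noise does not.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        curr[y][x] = 100        # the moving target
curr[0][7] = 100                # single-pixel clutter
opened = dilate(erode(diff_mask(prev, curr)))
print(sum(map(sum, opened)))    # → 9: blob retained, noise removed
```

Rejecting clutter before the tracking stage is what keeps the track count (up to 100 here) manageable in heavy urban scenes.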

  12. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable
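The metadata-indexing stage can be sketched as a tiny inverted index over the extracted textual clues; the clue strings below are hypothetical, and a production system would use a real search engine as the abstract describes:

```python
import re
from collections import defaultdict

def build_index(videos):
    """Map each term to the set of video ids whose extracted clues contain it."""
    index = defaultdict(set)
    for vid, clues in videos.items():
        for term in re.findall(r"[a-z]+", clues.lower()):
            index[term].add(vid)
    return index

def search(index, query):
    """Return ids matching every query term (conjunctive keyword search)."""
    sets = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*sets) if sets else set()

# Hypothetical per-video clue text (speech transcript + embedded captions).
videos = {
    "v1": "Introduction to cardiac anatomy and the conduction system",
    "v2": "Cardiac MRI acquisition protocols",
    "v3": "Anatomy of the renal system",
}
idx = build_index(videos)
print(sorted(search(idx, "cardiac anatomy")))  # → ['v1']
```

The visual navigation layer then sits on top of such an index, turning result sets into interactively explorable pages.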

  13. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems, and allows a higher level of flexibility of the method. First, the precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process that provides the feature points for the next frame's tracking.
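The motion-estimation step of the tracking phase can be illustrated with a minimal exhaustive block-matching search (not the authors' estimator; the frames and search window are toy-sized):

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-size patches."""
    return sum((a - b) ** 2 for ra, rb in zip(patch_a, patch_b)
               for a, b in zip(ra, rb))

def patch(frame, y, x, r=1):
    """Extract the (2r+1)x(2r+1) window centered at (y, x)."""
    return [row[x - r:x + r + 1] for row in frame[y - r:y + r + 1]]

def track_point(prev, curr, y, x, search=2, r=1):
    """Re-locate the feature at (y, x) in the next frame by exhaustive
    SSD search over a small window (a minimal motion-estimation step)."""
    template = patch(prev, y, x, r)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = ssd(template, patch(curr, y + dy, x + dx, r))
            if best is None or score < best[0]:
                best = (score, y + dy, x + dx)
    return best[1], best[2]

# A bright corner at (4, 4) shifts by (+1, +2) between frames.
prev = [[0] * 12 for _ in range(12)]
curr = [[0] * 12 for _ in range(12)]
prev[4][4] = 255
curr[5][6] = 255
print(track_point(prev, curr, 4, 4))  # → (5, 6)
```

The subsequent point-refinement step would then adjust such raw matches to sub-pixel positions before contour formation.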

  14. The ASDEX upgrade digital video processing system for real-time machine protection

    Energy Technology Data Exchange (ETDEWEB)

    Drube, Reinhard, E-mail: reinhard.drube@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Neu, Gregor [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard H.; Lüddecke, Klaus [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, 82393 Iffeldorf (Germany); Lunt, Tilmann; Herrmann, Albrecht [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany)

    2013-11-15

    Highlights: • We present the Real-Time Video diagnostic system of ASDEX Upgrade. • We show the implemented image processing algorithms for machine protection. • The way to achieve a robust operating multi-threading Real-Time system is described. -- Abstract: This paper describes the design, implementation, and operation of the Video Real-Time (VRT) diagnostic system of the ASDEX Upgrade plasma experiment and its integration with the ASDEX Upgrade Discharge Control System (DCS). Hot spots produced by heating systems erroneously or accidentally hitting the vessel walls, or from objects in the vessel reaching into the plasma outer border, show up as bright areas in the videos during and after the reaction. A system to prevent damage to the machine by allowing for intervention in a running discharge of the experiment was proposed and implemented. The VRT was implemented on a multi-core real-time Linux system. Up to 16 analog video channels (color and b/w) are acquired and multiple regions of interest (ROI) are processed on each video frame. Detected critical states can be used to initiate appropriate reactions – e.g. gracefully terminate the discharge. The system has been in routine operation since 2007.

  15. Big Data Analytics: Challenges And Applications For Text, Audio, Video, And Social Media Data

    OpenAIRE

    Jai Prakash Verma; Smita Agrawal; Bankim Patel; Atul Patel

    2016-01-01

    All types of machine automated systems are generating large amounts of data in different forms, such as statistical, text, audio, video, sensor, and bio-metric data, giving rise to the term Big Data. In this paper we discuss the issues, challenges, and applications of these types of Big Data with consideration of the big data dimensions. Here we discuss social media data analytics, content-based analytics, text data analytics, audio and video data analytics, their issues and expected applica...

  16. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers the advantages of fast convergence and high prediction accuracy in contrast with a regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  17. Practical system for generating digital mixed reality video holograms.

    Science.gov (United States)

    Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il

    2016-07-10

    We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphic processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally in free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is objectively verified by users' subjective evaluations.
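The Z-buffer mixing step can be shown per pixel: keep whichever of the real or virtual layer is nearer the camera. This is a simplified single-threaded sketch; the system described runs the equivalent on multiple GPUs:

```python
def z_mix(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel Z-buffer merge: keep whichever layer is nearer the camera.
    Depth maps use smaller = nearer; virtual pixels with depth None are empty."""
    h, w = len(real_rgb), len(real_rgb[0])
    out = [row[:] for row in real_rgb]
    for y in range(h):
        for x in range(w):
            vd = virt_depth[y][x]
            if vd is not None and vd < real_depth[y][x]:
                out[y][x] = virt_rgb[y][x]
    return out

# 1x3 scene: the virtual object is in front of the middle pixel only.
real_rgb   = [["R", "R", "R"]]
real_depth = [[2.0, 2.0, 2.0]]
virt_rgb   = [["V", "V", "V"]]
virt_depth = [[None, 1.0, 3.0]]
print(z_mix(real_rgb, real_depth, virt_rgb, virt_depth))  # → [['R', 'V', 'R']]
```

Pixels where the virtual depth is larger than the real depth stay real, which is exactly how the occlusion problem is "well handled" in the merged hologram.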

  18. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including input camera interface, the designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.
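A single-pixel caricature of a clustering-based motion detection scheme (generic, not the paper's exact VLSI algorithm): each pixel keeps a few background intensity centroids, and a new sample far from all of them is flagged as motion:

```python
def classify_and_update(clusters, value, radius=10, alpha=0.1):
    """One pixel of a clustering-based background model: 'clusters' holds the
    background intensity centroids for this pixel. Returns True if the new
    value is foreground (motion); otherwise adapts the matched centroid."""
    for i, c in enumerate(clusters):
        if abs(value - c) <= radius:
            clusters[i] = (1 - alpha) * c + alpha * value
            return False            # matches background: no motion
    return True                     # far from every cluster: motion

# A pixel that flickers between road (80) and shadow (120) is background;
# a passing vehicle (200) is flagged as motion.
pixel_model = [80.0, 120.0]
print(classify_and_update(pixel_model, 83))    # → False
print(classify_and_update(pixel_model, 200))   # → True
```

The per-pixel independence and fixed small cluster count are what make such schemes attractive for a pipelined FPGA implementation.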

  19. Video integrated measurement system. [Diagnostic display devices

    Energy Technology Data Exchange (ETDEWEB)

    Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.

    1982-06-01

    A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.

  20. Utilization of KSC Present Broadband Communications Data System for Digital Video Services

    Science.gov (United States)

    Andrawis, Alfred S.

    2002-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video on demand to desktop personal computers via the KSC computer network.

  1. Modification and Validation of an Automotive Data Processing Unit, Compessed Video System, and Communications Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing airplanes), for the commercial sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. In addition, the demonstration also consisted of a briefing, containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems

  2. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed on lampposts of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets and building blocks, and is surrounded by gates and water. The video recordings are

  3. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology—The ADAPT Study Data-Set

    Directory of Open Access Journals (Sweden)

    Alan Kevin Bourke

    2017-03-01

    Full Text Available Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, are not developed using the target population, or are not labelled to a high enough resolution to capture the true detail of human movement. We have designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol, and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects’ movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol, and a body-worn camera for out-of-lab activities. Video labelling of the subjects’ movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%, and Cohen’s kappa, corrected kappa, Krippendorff’s alpha and Fleiss’ kappa all >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate (≥25 fps) video labelled data, recorded in a free-living environment from older adults living independently. This dataset is suitable for validation of existing activity classification systems and development of new activity classification algorithms.
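
    The inter-rater agreement statistics reported above follow standard formulas; as a minimal illustration, Cohen's kappa for two raters can be computed in a few lines. The activity labels below are hypothetical, not taken from the ADAPT data-set.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same label independently.
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["walk", "walk", "sit", "stand", "walk", "sit"]
b = ["walk", "walk", "sit", "walk",  "walk", "sit"]
print(round(cohens_kappa(a, b), 3))  # -> 0.7
```

    Values above roughly 0.8 are conventionally read as near-perfect agreement, which is the regime the ADAPT raters reached.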

  4. A novel video recommendation system based on efficient retrieval of human actions

    Science.gov (United States)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the rapid growth of online video sharing has raised new issues, such as helping users find what they need efficiently. Hence, Recommender Systems (RSs) are used to find a user's most favored items. Finding these items relies on item or user similarities. However, many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) for personalized recommendation. Differing viewpoints and incomplete or inaccurate tags can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a query video is taken from the user in order to find and recommend a list of the videos most similar to it. Since most videos involve humans, we present a novel low-complexity, scalable method to recommend videos based on a model of the action they contain. This method draws on human action retrieval approaches. To model human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos and rank them. Experimental results on the HMDB, UCFYT, UCF sport and KTH datasets illustrate that, in most cases, the proposed method achieves better results than commonly used methods.
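
    The abstract does not spell out the paper's exact action representation or dissimilarity measure; the sketch below only illustrates the general idea of ranking videos by a fuzzy (Jaccard-style) dissimilarity between normalized motion-feature histograms. The histograms, clip names and the specific measure are illustrative assumptions.

```python
def fuzzy_dissimilarity(h1, h2):
    # Treat normalized histograms as fuzzy membership vectors;
    # dissimilarity = 1 - (sum of minima / sum of maxima), a fuzzy Jaccard measure.
    num = sum(min(a, b) for a, b in zip(h1, h2))
    den = sum(max(a, b) for a, b in zip(h1, h2))
    return 1.0 - num / den if den else 0.0

def recommend(query_hist, library, k=2):
    """Rank library clips by dissimilarity to the query and return the top k."""
    ranked = sorted(library.items(),
                    key=lambda item: fuzzy_dissimilarity(query_hist, item[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical motion-feature histograms for three library clips.
library = {
    "clip_a": [0.5, 0.3, 0.2],
    "clip_b": [0.1, 0.1, 0.8],
    "clip_c": [0.45, 0.35, 0.2],
}
print(recommend([0.5, 0.3, 0.2], library))  # -> ['clip_a', 'clip_c']
```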

  5. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a crucial component of safeguards and physical protection. Digital technology has revolutionized the surveillance scenario and brought in new capabilities such as better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguards requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguards surveillance systems. Today's safeguards systems can incorporate intelligent motion detection with a very low false alarm rate and reduced archiving volume, embedded image processing capability for object behavior and event-based indexing, object recognition, and efficient querying and report generation. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating transmission. In physical protection, robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, and human motion analysis are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools, such as automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, and gesture recognition, makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional, all-weather, day/night surveillance a reality.

  6. Design and implementation of PAVEMON: A GIS web-based pavement monitoring system based on large amounts of heterogeneous sensors data

    Science.gov (United States)

    Shahini Shamsabadi, Salar

    A web-based PAVEment MONitoring system, PAVEMON, is a GIS-oriented platform for accommodating, representing, and leveraging data from a multi-modal mobile sensor system. This sensor system consists of acoustic, optical, electromagnetic, and GPS sensors and is capable of producing as much as 1 terabyte of data per day. Multi-channel raw sensor data (microphone, accelerometer, tire pressure sensor, video) and processed results (road profile, crack density, international roughness index, micro texture depth, etc.) are outputs of this sensor system. By correlating the sensor measurements and positioning data collected in tight time synchronization, PAVEMON attaches a spatial component to all the datasets. These spatially indexed outputs are placed into an Oracle database which integrates seamlessly with PAVEMON's web-based system, which consists of two major modules: 1) a GIS module for visualizing and spatially analyzing pavement condition information layers, and 2) a decision-support module for managing maintenance and repair (M&R) activities and predicting future budget needs. PAVEMON weaves together sensor data with third-party climate and traffic information from the National Oceanic and Atmospheric Administration (NOAA) and Long Term Pavement Performance (LTPP) databases for an organized, data-driven approach to pavement management activities. PAVEMON deals with heterogeneous and redundant observations by fusing them into jointly-derived, higher-confidence results. A prominent example of the fusion algorithms developed within PAVEMON is one used for estimating overall pavement condition in terms of ASTM's Pavement Condition Index (PCI). PAVEMON predicts PCI by taking a statistical fusion approach and selecting a subset of all the sensor measurements. Other fusion algorithms include noise-removal algorithms that remove false negatives in the sensor data, in addition to fusion algorithms developed for
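
    The abstract does not give PAVEMON's actual fusion formula. One common statistical-fusion approach consistent with the description is inverse-variance weighting of per-sensor condition estimates; the PCI readings and variances below are hypothetical.

```python
def fuse_pci(estimates):
    """Inverse-variance weighted fusion of per-sensor PCI estimates.

    estimates: list of (pci_value, variance) pairs, one per sensor channel.
    Channels with lower variance (higher confidence) get larger weights.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * pci for (pci, _), w in zip(estimates, weights)) / sum(weights)
    return fused

# Hypothetical PCI estimates (0-100 scale) from three sensor channels.
readings = [(72.0, 4.0), (68.0, 16.0), (75.0, 8.0)]
print(round(fuse_pci(readings), 2))  # -> 72.29
```

    The fused value sits closest to the most confident channel, which is the intended effect of jointly-derived, higher-confidence results.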

  7. Adaptive Sensing Based on Profiles for Sensor Systems

    Directory of Open Access Journals (Sweden)

    Yoshiteru Ishida

    2009-10-01

    Full Text Available This paper proposes a profile-based sensing framework for adaptive sensor systems based on models that relate possibly heterogeneous sensor data and profiles generated by the models to detect events. With these concepts, three phases for building the sensor systems are extracted from two examples: a combustion control sensor system for an automobile engine, and a sensor system for home security. The three phases are: modeling, profiling, and managing trade-offs. Designing and building a sensor system involves mapping the signals to a model to achieve a given mission.
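
    A minimal sketch of the profiling idea, assuming the simplest possible profile: a per-sensor mean and standard deviation learned from training data, with events flagged outside a k-sigma band. The paper's model-generated profiles are more elaborate; the temperature readings are hypothetical.

```python
import statistics

class SensorProfile:
    """Learn a per-sensor profile (mean/std) and flag readings outside it."""

    def __init__(self, training, k=3.0):
        self.mean = statistics.mean(training)
        self.std = statistics.stdev(training)
        self.k = k

    def is_event(self, value):
        # An event is any reading more than k standard deviations from the profile.
        return abs(value - self.mean) > self.k * self.std

temps = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1]   # training readings
profile = SensorProfile(temps)
print(profile.is_event(20.2), profile.is_event(25.0))  # -> False True
```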

  8. Sensor-guided threat countermeasure system

    Science.gov (United States)

    Stuart, Brent C.; Hackel, Lloyd A.; Hermann, Mark R.; Armstrong, James P.

    2012-12-25

    A countermeasure system for use by a target to protect against an incoming sensor-guided threat. The system includes a laser system for producing a broadband beam and means for directing the broadband beam from the target to the threat. The countermeasure system comprises the steps of producing a broadband beam and directing the broadband beam from the target to blind or confuse the incoming sensor-guided threat.

  9. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
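
    The described technique can be sketched as follows: both ends derive the same sample points from a shared seed, and the recorder authenticates a frame when the sampled gray-scale values agree within limits. The frame layout, tolerances and seeding scheme below are illustrative assumptions, not the paper's exact protocol.

```python
import random

def sample_points(seed, width, height, n=16):
    # Camera and recorder controllers derive identical points from a shared seed.
    rng = random.Random(seed)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(n)]

def authenticate(frame_cam, frame_rec, seed, tol=8, max_bad=2):
    """Compare gray values at the sample points; tolerate a few noisy pixels."""
    h, w = len(frame_cam), len(frame_cam[0])
    points = sample_points(seed, w, h)
    bad = sum(1 for x, y in points
              if abs(frame_cam[y][x] - frame_rec[y][x]) > tol)
    return bad <= max_bad

# A 64x48 test frame; the "substituted" frame differs at every pixel.
frame = [[(x * y) % 256 for x in range(64)] for y in range(48)]
substituted = [[(v + 128) % 256 for v in row] for row in frame]
print(authenticate(frame, frame, seed=42))        # -> True  (image authenticated)
print(authenticate(frame, substituted, seed=42))  # -> False (substitution detected)
```

    Because the sample points change with the seed, an adversary cannot pre-compute a false image that matches at the compared points.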

  10. Sensor system for web inspection

    Science.gov (United States)

    Sleefe, Gerard E.; Rudnick, Thomas J.; Novak, James L.

    2002-01-01

    A system for electrically measuring variations over a flexible web has a capacitive sensor including spaced electrically conductive, transmit and receive electrodes mounted on a flexible substrate. The sensor is held against a flexible web with sufficient force to deflect the path of the web, which moves relative to the sensor.

  11. Evaluation of video detection systems, volume 1 : effects of configuration changes in the performance of video detection systems.

    Science.gov (United States)

    2009-10-01

    The effects of modifying the configuration of three video detection (VD) systems (Iteris, Autoscope, and Peek) : are evaluated in daytime and nighttime conditions. Four types of errors were used: false, missed, stuck-on, and : dropped calls. The thre...

  12. System-level Modeling of Wireless Integrated Sensor Networks

    DEFF Research Database (Denmark)

    Virk, Kashif M.; Hansen, Knud; Madsen, Jan

    2005-01-01

    Wireless integrated sensor networks have emerged as a promising infrastructure for a new generation of monitoring and tracking applications. In order to efficiently utilize the extremely limited resources of wireless sensor nodes, accurate modeling of the key aspects of wireless sensor networks is necessary so that system-level design decisions can be made about the hardware and the software (applications and real-time operating system) architecture of sensor nodes. In this paper, we present a SystemC-based abstract modeling framework that enables system-level modeling of sensor network behavior by modeling the applications, real-time operating system, sensors, processor, and radio transceiver at the sensor node level and environmental phenomena, including radio signal propagation, at the sensor network level. We demonstrate the potential of our modeling framework by simulating and analyzing a small...

  13. Workflow-Oriented Cyberinfrastructure for Sensor Data Analytics

    Science.gov (United States)

    Orcutt, J. A.; Rajasekar, A.; Moore, R. W.; Vernon, F.

    2015-12-01

    Sensor streams comprise an increasingly large part of Earth Science data. Analytics based on sensor data require an easy way to perform operations such as acquisition, conversion to physical units, metadata linking, sensor fusion, analysis and visualization on distributed sensor streams. Furthermore, embedding real-time sensor data into scientific workflows is of growing interest. We have implemented a scalable networked architecture that can be used to dynamically access packets of data in a stream from multiple sensors, and perform synthesis and analysis across a distributed network. Our system is based on the integrated Rule Oriented Data System (irods.org), which accesses sensor data from the Antelope Real Time Data System (brtt.com), and provides virtualized access to collections of data streams. We integrate real-time data streaming from different sources, collected for different purposes, on different time and spatial scales, and sensed by different methods. iRODS, noted for its policy-oriented data management, brings to sensor processing features and facilities such as single sign-on, third-party access control lists (ACLs), location transparency, logical resource naming, and server-side modeling capabilities while reducing the burden on sensor network operators. Rich integrated metadata support also makes it straightforward to discover data streams of interest and maintain data provenance. The workflow support in iRODS readily integrates sensor processing into any analytical pipeline. The system is developed as part of the NSF-funded Datanet Federation Consortium (datafed.org). APIs for selecting, opening, reaping and closing sensor streams are provided, along with other helper functions to associate metadata and convert sensor packets into NetCDF and JSON formats. Near real-time sensor data including seismic sensors, environmental sensors, LIDAR and video streams are available through this interface.
A system for archiving sensor data and metadata in Net

  14. A practical implementation of free viewpoint video system for soccer games

    Science.gov (United States)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand, but a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. The envisaged scenario is that soccer games played during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing, but we have already developed several techniques to support this goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated them using checkerboard images and feature points on the field (cross points of the soccer field lines). Each player region is extracted from the captured images manually, while the background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, the practical system is not yet complete and our study is still ongoing.
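
    The automatic background estimation step described above (observing per-pixel changes in the temporal domain) can be sketched with a per-pixel temporal median. A single-channel stand-in is used here instead of the paper's chrominance values.

```python
from statistics import median

def estimate_background(frames):
    """Per-pixel temporal median over a frame sequence.

    A pixel covered by a moving player only briefly keeps its
    dominant (field) value in the median.
    """
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

# Tiny stand-in frames: field value 50, a "player" (value 200) passes through row 1.
frames = []
for t in range(5):
    frame = [[50] * 4 for _ in range(3)]
    frame[1][t % 4] = 200          # moving foreground pixel
    frames.append(frame)
print(estimate_background(frames)[1])  # -> [50, 50, 50, 50]
```

    Each pixel is foreground in at most two of the five frames, so the median recovers the field value everywhere.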

  15. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities; these may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  16. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used automatically to archive the surveillance pictures. The design of the surveillance system is described with examples of its operation

  17. Realization on the interactive remote video conference system based on multi-Agent

    Directory of Open Access Journals (Sweden)

    Zheng Yan

    2016-01-01

    Full Text Available To let people in different places participate in the same conference and speak and discuss freely, an interactive remote video conferencing system was designed and realized based on multi-Agent collaboration. FEC (forward error correction) and tree-structured P2P technology are first used to build a live conference structure for transferring audio and video data; a branch conference node can then apply to become the interactive focus in order to speak and discuss; and the introduction of multi-Agent collaboration technology improves the system's robustness. Experiments showed that, under normal network conditions, the system can support 350 branch conference nodes broadcasting simultaneously, with smooth audio and video quality. It can support large-scale remote video conferences.
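
    The FEC component mentioned above can be illustrated with the simplest possible scheme: a single XOR parity packet per block, which lets the receiver rebuild one lost packet without retransmission. The system's actual FEC is not specified in the abstract.

```python
def xor_parity(packets):
    """Parity packet: byte-wise XOR of all packets in the block."""
    parity = bytearray(len(packets[0]))
    for packet in packets:
        for i, byte in enumerate(packet):
            parity[i] ^= byte
    return bytes(parity)

def recover(survivors, parity):
    """Rebuild the single lost packet: XOR of the survivors and the parity."""
    return xor_parity(survivors + [parity])

block = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_parity(block)
# Packet 1 is lost in transit; the survivors plus parity reconstruct it.
print(recover([block[0], block[2]], parity))  # -> b'bbbb'
```

    Avoiding retransmission this way matters for live broadcasting, where a retransmitted packet would usually arrive too late to be played out.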

  18. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras aboard a sub-orbital rocket were controlled in real time from the ground during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the selection of the analog video signal sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  19. Sensor Pods: Multi-Resolution Surveys from a Light Aircraft

    Directory of Open Access Journals (Sweden)

    Conor Cahalane

    2017-02-01

    Full Text Available Airborne remote sensing, whether performed from conventional aerial survey platforms such as light aircraft or the more recent Remotely Piloted Aircraft Systems (RPAS), has the ability to complement mapping generated using earth-orbiting satellites, particularly for areas that may experience prolonged cloud cover. Traditional aerial platforms are costly but capture imagery with high spectral resolution over large areas. RPAS are relatively low-cost and provide very-high-resolution imagery, but this is limited to small areas. We believe that we are the first group to retrofit these new, low-cost, lightweight sensors in a traditional aircraft. Unlike RPAS surveys, which have a limited payload, this is the first time that a method has been designed to operate four distinct RPAS sensors simultaneously: hyperspectral, thermal, RGB and video. This means that imagery covering a broad range of the spectrum, captured during a single survey through different image capture techniques (frame, pushbroom, video), can be applied to investigate multiple aspects of the surrounding environment, such as soil moisture, vegetation vitality, topography or drainage. In this paper, we present the initial results validating our innovative hybrid system adapting dedicated RPAS sensors for a light aircraft sensor pod, thereby providing the benefits of both methodologies. Simultaneous image capture with a Nikon D800E SLR and a series of dedicated RPAS sensors, including a FLIR thermal imager, a four-band multispectral camera and a 100-band hyperspectral imager, was enabled by integration in a single sensor pod operating from a Cessna c172. However, to enable accurate sensor fusion for image analysis, each sensor must first be registered in a common vehicle coordinate system, and a method devised for triggering, time-stamping and calculating the position/pose of each sensor at the time of image capture. Initial tests were carried out over agricultural regions with
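
    Calculating the position of each sensor at the time of image capture typically reduces to interpolating the platform's GPS track at the trigger timestamp. The sketch below assumes simple linear interpolation between fixes; the timestamps and coordinates are hypothetical.

```python
def interpolate_position(gps_track, t):
    """Linearly interpolate (lat, lon, alt) at trigger time t.

    gps_track: time-sorted list of (timestamp, lat, lon, alt) fixes.
    """
    for (t0, *p0), (t1, *p1) in zip(gps_track, gps_track[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(a + f * (b - a) for a, b in zip(p0, p1))
    raise ValueError("trigger timestamp outside GPS track")

# Hypothetical 1 Hz GPS fixes; the camera triggers mid-way between fixes.
track = [
    (0.0, 53.380, -6.590, 300.0),
    (1.0, 53.381, -6.592, 302.0),
    (2.0, 53.382, -6.594, 304.0),
]
print(interpolate_position(track, 1.5))
```

    In practice each sensor's lever-arm offset from the GPS antenna and the aircraft's attitude (from an IMU) would then be applied to turn this platform position into a per-sensor pose.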

  20. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system; a previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) which can compress (MPEG4), store and display them. The platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network, and relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results here and show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.
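
    The motion detection module mentioned above can be illustrated with the most basic approach, frame differencing with a pixel-count threshold; the platform's actual detector is more sophisticated, but the principle is the same. The frames and thresholds below are toy values.

```python
def detect_motion(prev, curr, threshold=25, min_pixels=3):
    """Flag motion when enough pixels change by more than `threshold` gray levels."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for a, b in zip(row_p, row_c)
        if abs(a - b) > threshold
    )
    return changed >= min_pixels

# A static 6x4 scene, then a frame where an object enters three pixels of row 1.
still = [[10] * 6 for _ in range(4)]
moved = [row[:] for row in still]
for x in range(2, 5):
    moved[1][x] = 200
print(detect_motion(still, still), detect_motion(still, moved))  # -> False True
```

    The `min_pixels` floor is what keeps isolated sensor noise from raising a false alarm, the same trade-off the abstract describes for alarm highlighting.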

  1. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreating of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  2. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Choosing a video encoding method with an optimal quality/volume ratio is one of the most pressing problems, due to the urgent need to transfer large amounts of video over various networks. Digital TV signal compression technology reduces the amount of data used to represent a video stream, effectively reducing the bit rate required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on measurement error in television systems. Measurement error of an object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. The optical system is one source of error in television system measurements; the method used to process the received video signal is another. In compression with a constant data stream rate, the presence of errors leads to large distortions; at constant quality, it increases the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between elements of the image. If a corresponding orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. For typical images, a transformation can be selected such that most of the matrix coefficients are almost zero. Excluding these zero coefficients also
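
    The decorrelation argument above can be demonstrated directly: applying a 1-D DCT-II, the orthogonal transform used in most TV compression standards, to a smooth, strongly correlated block of samples concentrates almost all the energy in the first coefficient, leaving most of the rest near zero.

```python
import math

def dct2(block):
    """Orthonormal 1-D DCT-II of a block of samples."""
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(block))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A smooth (strongly correlated) block of image samples: a linear ramp.
block = [100, 102, 104, 106, 108, 110, 112, 114]
coeffs = dct2(block)
# Almost all energy lands in the first (DC) coefficient; most remaining
# coefficients are near zero and therefore cheap to entropy-code.
near_zero = sum(1 for c in coeffs[1:] if abs(c) < 2.0)
print(near_zero)  # -> 6 of the 7 AC coefficients are near zero
```

    Quantizing these near-zero coefficients to exactly zero is what produces the bit-rate savings, and also the compression errors whose effect on measurement accuracy the paper studies.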

  3. Common bus multinode sensor system

    International Nuclear Information System (INIS)

    Kelly, T.F.; Naviasky, E.H.; Evans, W.P.; Jefferies, D.W.; Smith, J.R.

    1988-01-01

    This patent describes a nuclear power plant including a common bus multinode sensor system for sensors in the nuclear power plant, each sensor producing a sensor signal. The system consists of: a power supply providing power; a communication cable coupled to the power supply; plural remote sensor units coupled between the cable and one or more sensors, and comprising: a direct current power supply, connected to the cable and converting the power on the cable into direct current; an analog-to-digital converter connected to the direct current power supply; an oscillator reference; a filter; and an integrated circuit sensor interface connected to the direct current power supply, the analog-to-digital converter, the oscillator reference and the filter, the interface comprising: a counter receiving a frequency designation word from external to the interface; a phase-frequency comparator connected to the counter; an oscillator connected to the oscillator reference; a timing counter connected to the oscillator, the phase/frequency comparator and the analog-to-digital converter; an analog multiplexer connectable to the sensors and the analog-to-digital converter, and connected to the timing counter; a shift register operatively connected to the timing counter and the analog-to-digital converter; an encoder connected to the shift register and connectable to the filter; and a voltage controlled oscillator connected to the filter and the cable

  4. Overview video diagnostics for the W7-X stellarator

    Energy Technology Data Exchange (ETDEWEB)

    Kocsis, G., E-mail: kocsis.gabor@wigner.mta.hu [Wigner RCP, RMI, Konkoly Thege 29-33, H-1121 Budapest (Hungary); Baross, T. [Wigner RCP, RMI, Konkoly Thege 29-33, H-1121 Budapest (Hungary); Biedermann, C. [Max-Planck-Institute for Plasma Physics, 17491 Greifswald (Germany); Bodnár, G.; Cseh, G.; Ilkei, T. [Wigner RCP, RMI, Konkoly Thege 29-33, H-1121 Budapest (Hungary); König, R.; Otte, M. [Max-Planck-Institute for Plasma Physics, 17491 Greifswald (Germany); Szabolics, T.; Szepesi, T.; Zoletnik, S. [Wigner RCP, RMI, Konkoly Thege 29-33, H-1121 Budapest (Hungary)

    2015-10-15

    Considering the requirements of the newly built Wendelstein 7-X stellarator, a ten-channel overview video diagnostic system was developed and is presently under installation. The system, covering the whole torus interior, can be used not only to observe the plasma but also to detect irregular operational events which are dangerous for the stellarator itself and to send automatic warnings for machine safety. The ten tangential AEQ ports used by the diagnostic remain under atmospheric pressure; the vacuum/air interface is at the front window located at the plasma side of the AEQ port. The optical vacuum window is protected by a cooled pinhole. The Sensor Module (SM) of the intelligent camera (EDICAM), developed especially for this purpose, is located directly behind the vacuum window. EDICAM is designed to simultaneously record several regions of interest of its CMOS sensor at different frame rates and to detect various predefined events in real time. The air-cooled SM is fixed by a docking mechanism which preserves the pointing of the view. EDICAM can withstand the magnetic field (∼3 T) and the neutron and gamma fluxes expected in the AEQ port. In order to exploit the new features of the video diagnostic system, both the control and data acquisition software and the visualization and data processing software are being developed.

  5. Overview video diagnostics for the W7-X stellarator

    International Nuclear Information System (INIS)

    Kocsis, G.; Baross, T.; Biedermann, C.; Bodnár, G.; Cseh, G.; Ilkei, T.; König, R.; Otte, M.; Szabolics, T.; Szepesi, T.; Zoletnik, S.

    2015-01-01

    Considering the requirements of the newly built Wendelstein 7-X stellarator, a ten-channel overview video diagnostic system was developed and is presently under installation. The system, covering the whole torus interior, can be used not only to observe the plasma but also to detect irregular operational events which are dangerous for the stellarator itself and to send automatic warnings for machine safety. The ten tangential AEQ ports used by the diagnostic remain under atmospheric pressure; the vacuum/air interface is at the front window located at the plasma side of the AEQ port. The optical vacuum window is protected by a cooled pinhole. The Sensor Module (SM) of the intelligent camera (EDICAM) – developed especially for this purpose – is located directly behind the vacuum window. EDICAM is designed to simultaneously record several regions of interest of its CMOS sensor with different frame rates and to detect various predefined events in real time. The air-cooled SM is fixed by a docking mechanism which can preserve the pointing of the view. EDICAM can withstand the magnetic field (∼3 T) and the neutron and gamma fluxes expected in the AEQ port. In order to accommodate the new features of the video diagnostic system, both the control and data acquisition software and the visualization and data processing software are being developed.

  6. Proximity sensor system development. CRADA final report

    Energy Technology Data Exchange (ETDEWEB)

    Haley, D.C. [Oak Ridge National Lab., TN (United States); Pigoski, T.M. [Merrit Systems, Inc. (United States)

    1998-01-01

    Lockheed Martin Energy Research Corporation (LMERC) and Merritt Systems, Inc. (MSI) entered into a Cooperative Research and Development Agreement (CRADA) for the development and demonstration of a compact, modular proximity sensing system suitable for application to a wide class of manipulator systems operated in support of environmental restoration and waste management activities. In teleoperated modes, proximity sensing provides the manipulator operator continuous information regarding the proximity of the manipulator to objects in the workspace. In teleoperated and robotic modes, proximity sensing provides added safety through the implementation of active whole arm collision avoidance capabilities. Oak Ridge National Laboratory (ORNL), managed by LMERC for the United States Department of Energy (DOE), has developed an application specific integrated circuit (ASIC) design for the electronics required to support a modular whole arm proximity sensing system based on the use of capacitive sensors developed at Sandia National Laboratories. The use of ASIC technology greatly reduces the size of the electronics required to support the selected sensor types allowing deployment of many small sensor nodes over a large area of the manipulator surface to provide maximum sensor coverage. The ASIC design also provides a communication interface to support sensor commands from and sensor data transmission to a distributed processing system which allows modular implementation and operation of the sensor system. MSI is a commercial small business specializing in proximity sensing systems based upon infrared and acoustic sensors.

  7. Proximity sensor system development. CRADA final report

    International Nuclear Information System (INIS)

    Haley, D.C.; Pigoski, T.M.

    1998-01-01

    Lockheed Martin Energy Research Corporation (LMERC) and Merritt Systems, Inc. (MSI) entered into a Cooperative Research and Development Agreement (CRADA) for the development and demonstration of a compact, modular proximity sensing system suitable for application to a wide class of manipulator systems operated in support of environmental restoration and waste management activities. In teleoperated modes, proximity sensing provides the manipulator operator continuous information regarding the proximity of the manipulator to objects in the workspace. In teleoperated and robotic modes, proximity sensing provides added safety through the implementation of active whole arm collision avoidance capabilities. Oak Ridge National Laboratory (ORNL), managed by LMERC for the United States Department of Energy (DOE), has developed an application specific integrated circuit (ASIC) design for the electronics required to support a modular whole arm proximity sensing system based on the use of capacitive sensors developed at Sandia National Laboratories. The use of ASIC technology greatly reduces the size of the electronics required to support the selected sensor types allowing deployment of many small sensor nodes over a large area of the manipulator surface to provide maximum sensor coverage. The ASIC design also provides a communication interface to support sensor commands from and sensor data transmission to a distributed processing system which allows modular implementation and operation of the sensor system. MSI is a commercial small business specializing in proximity sensing systems based upon infrared and acoustic sensors.

  8. High-speed holographic correlation system for video identification on the internet

    Science.gov (United States)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material on the Internet. By combining a high-speed correlation engine and web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which is capable of easily replacing the digital authorization server in FReCs with optical correlation.
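The published abstract does not give the matching algorithm, but the digital correlation step it describes can be sketched in its simplest form: compare a per-frame luminance signature of a query clip against a database by normalized cross-correlation. The signature format, database layout, and threshold below are illustrative assumptions, not the actual FReCs design.

```python
# Hypothetical sketch of signature matching by normalized cross-correlation.

def normalize(v):
    """Zero-mean, unit-norm copy of a signature vector."""
    m = sum(v) / len(v)
    centered = [x - m for x in v]
    norm = sum(x * x for x in centered) ** 0.5 or 1.0
    return [x / norm for x in centered]

def correlation(a, b):
    """Normalized cross-correlation of two equal-length signatures."""
    na, nb = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(na, nb))

def identify(query, database, threshold=0.9):
    """Return names of database clips whose signature correlates above threshold."""
    return [name for name, sig in database.items()
            if correlation(query, sig) >= threshold]

db = {"clip_a": [10, 50, 90, 50, 10], "clip_b": [80, 80, 10, 10, 80]}
print(identify([12, 52, 88, 49, 11], db))  # close match to clip_a
```

A real system would use far more robust fingerprints, but the correlate-then-threshold structure is the same.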

  9. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
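The background-subtraction component mentioned above can be illustrated with a minimal one-dimensional sketch; the running-average background model, difference threshold, and occupancy fraction below are illustrative stand-ins for the paper's actual method.

```python
# Minimal sketch of background subtraction for occupancy detection
# (illustrative parameters, not the published system).

def update_background(background, frame, alpha=0.1):
    """Exponential running average: adapts to gradual illumination change."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=20):
    """Pixels differing from the background by more than the threshold."""
    return [abs(f - b) > threshold for f, b in zip(frame, background)]

def occupied(mask, min_fraction=0.3):
    """Declare a parking stall occupied if enough pixels are foreground."""
    return sum(mask) / len(mask) >= min_fraction

bg = [100.0] * 8                                  # empty-stall background (toy "image")
frame = [100, 102, 180, 185, 190, 99, 101, 100]   # vehicle covers pixels 2-4
mask = foreground_mask(bg, frame)
print(occupied(mask))  # True: 3 of 8 pixels changed
```

The running average is what lets such a detector tolerate the slow illumination changes the paper lists among its challenges; sudden camera motion and shadows need the additional machinery the authors describe.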

  10. VME Switch for CERN's PS Analog Video System

    CERN Document Server

    Acebes, I; Heinze, W; Lewis, J; Serrano, J

    2003-01-01

    Analog video signal switching is used in CERN's Proton Synchrotron (PS) complex to route the video signals coming from Beam Diagnostics systems to the Meyrin Control Room (MCR). Traditionally, this has been done with custom electromechanical relay-based cards controlled serially via CAMAC crates. In order to improve the robustness and maintainability of the system, while keeping it analog to preserve the low latency, a VME card based on Analog Devices' AD8116 analog matrix chip has been developed. Video signals go into the front panel and exit the switch through the P2 connector of the VME backplane. The module is a 16 input, 32 output matrix. Larger matrices can be built using more modules and bussing their outputs together, thanks to the high impedance feature of the AD8116. Another VME module takes the selected signals from the P2 connector and performs automatic gain to send them at nominal output level through its front panel. This paper discusses both designs and presents experimental test results.
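The routing logic of such a crosspoint card can be modelled in a few lines. The sketch below is a toy software model of a 16-input, 32-output matrix with tri-stated (high-impedance) outputs, the property that lets several modules bus their outputs together; the class and its interface are hypothetical, not the VME card's actual register map.

```python
# Toy model of a 16x32 crosspoint switch with tri-state outputs (hypothetical API).

class CrosspointSwitch:
    def __init__(self, n_in=16, n_out=32):
        self.n_in, self.n_out = n_in, n_out
        self.route = {}                  # output index -> input index

    def connect(self, inp, out):
        if not (0 <= inp < self.n_in and 0 <= out < self.n_out):
            raise ValueError("index out of range")
        self.route[out] = inp

    def disconnect(self, out):
        """Tri-state the output, as when bussing several modules together."""
        self.route.pop(out, None)

    def drive(self, inputs):
        """Map a list of input signal levels to the 32 outputs (None = high-Z)."""
        return [inputs[self.route[o]] if o in self.route else None
                for o in range(self.n_out)]

sw = CrosspointSwitch()
sw.connect(3, 0)                         # route camera on input 3 to output 0
print(sw.drive(list(range(16)))[0])      # -> 3
```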

  11. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  12. Optical fiber sensors: Systems and applications. Volume 2

    Science.gov (United States)

    Culshaw, Brian; Dakin, John

    State-of-the-art fiber-optic (FO) sensors and their applications are described in chapters contributed by leading experts. Consideration is given to interferometers, FO gyros, intensity- and wavelength-based sensors and optical actuators, Si in FO sensors, point-sensor multiplexing principles, and distributed FO sensor systems. Also examined are chemical, biochemical, and medical sensors; physical and chemical sensors for process control; FO-sensor applications in the marine and aerospace industries; FO-sensor monitoring systems for security and safety, structural integrity, NDE, and the electric-power industry; and the market situation for FO-sensor technology. Diagrams, drawings, graphs, and photographs are provided.

  13. Exterior field evaluation of new generation video motion detection systems

    International Nuclear Information System (INIS)

    Malone, T.P.

    1988-01-01

    Recent advancements in video motion detection (VMD) system design and technology have resulted in several new commercial VMD systems. Considerable interest in the new VMD systems has been generated because the systems are advertised to work effectively in exterior applications. Previous VMD systems, when used in an exterior environment, tended to have very high nuisance alarm rates due to weather conditions, wildlife activity and lighting variations. The new VMD systems advertise more advanced processing of the incoming video signal which is aimed at rejecting exterior environmental nuisance alarm sources while maintaining a high detection capability. This paper discusses the results of field testing, in an exterior environment, of two new VMD systems

  14. Robust Solar Position Sensor for Tracking Systems

    DEFF Research Database (Denmark)

    Ritchie, Ewen; Argeseanu, Alin; Leban, Krisztina Monika

    2009-01-01

    The paper proposes a new solar position sensor used in tracking system control. The main advantages of the new solution are its robustness and low cost. Positioning accuracy of the tracking system that uses the new sensor is better than 1°. The new sensor uses the ancient principle of the solar clock. The sensitive elements are eight ordinary photo-resistors. It is important to note that not all the sensors are selected simultaneously. It is not necessary for the sensor operating characteristics to be quasi-identical, because the sensor principle is based on extreme operating duty measurement (bright or dark). In addition, the proposed solar sensor significantly simplifies the operation of the tracking control device.
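The sundial principle with binary (bright or dark) readings can be sketched as follows: take the bearing of the sun as the circular mean of the directions of the lit photo-resistors. The 45-degree spacing and the averaging rule are illustrative assumptions; the paper's control scheme is not published in this abstract.

```python
# Sketch of sun-bearing estimation from eight binary photo-resistor readings
# (sensor geometry and averaging rule assumed for illustration).
import math

SENSOR_ANGLES = [i * 45.0 for i in range(8)]  # 0, 45, ..., 315 degrees

def sun_bearing(bright):
    """Circular mean of the directions of the sensors reporting 'bright'."""
    lit = [a for a, b in zip(SENSOR_ANGLES, bright) if b]
    if not lit:
        return None                     # night or heavy overcast
    x = sum(math.cos(math.radians(a)) for a in lit)
    y = sum(math.sin(math.radians(a)) for a in lit)
    return math.degrees(math.atan2(y, x)) % 360

# Sensors at 45 and 90 degrees are lit: the bearing splits the difference.
print(sun_bearing([False, True, True, False, False, False, False, False]))  # -> 67.5
```

Because only bright/dark is used, mismatched photo-resistor characteristics do not matter, which is exactly the robustness argument the abstract makes.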

  15. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Distributed video coding (DVC) is a new paradigm for video compression based on the information-theoretic results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate-distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion, but it is still inferior to the Inter No Motion and Inter Motion modes.
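The IMSI algorithm itself is iterative and multiview, but the side-information idea it builds on can be shown in its simplest form: the decoder estimates a Wyner-Ziv frame by interpolating the two neighbouring key frames. The pixel-wise linear interpolation below is the most basic SI generator, used here only to make the concept concrete.

```python
# Simplest form of decoder-side side-information generation in DVC:
# temporal interpolation of the two key frames around a Wyner-Ziv frame.

def interpolate_si(key_prev, key_next, weight=0.5):
    """Pixel-wise weighted average of the surrounding key frames."""
    return [weight * a + (1 - weight) * b for a, b in zip(key_prev, key_next)]

def si_quality(si, truth):
    """Mean absolute error between side information and the true frame."""
    return sum(abs(s - t) for s, t in zip(si, truth)) / len(si)

prev_frame = [10, 20, 30, 40]
next_frame = [20, 30, 40, 50]
true_frame = [15, 25, 35, 45]           # object moved halfway between key frames
si = interpolate_si(prev_frame, next_frame)
print(si_quality(si, true_frame))       # -> 0.0 for this idealized linear motion
```

For real sequences with complex motion the interpolation error is nonzero; better SI means fewer parity bits are needed, which is why SI quality drives the RD performance the paper measures.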

  16. Specialized video systems for use in waste tanks

    International Nuclear Information System (INIS)

    Anderson, E.K.; Robinson, C.W.; Heckendorn, F.M.

    1992-01-01

    The Robotics Development Group at the Savannah River Site is developing a remote video system for use in underground radioactive waste storage tanks at the Savannah River Site, as a portion of its site support role. Viewing of the tank interiors and their associated annular spaces is an extremely valuable tool in assessing their condition and controlling their operation. Several specialized video systems have been built that provide remote viewing and lighting, including remotely controlled tank entry and exit. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. The SRS waste tanks are nominal 4.5 million liter (1.3 million gallon) underground tanks used to store liquid high-level radioactive waste generated by the site, awaiting final disposal. The typical waste tank (Figure 1) is of flattened shape (i.e., wider than high). The tanks sit in a dry secondary containment pan. The annular space between the tank wall and the secondary containment wall is continuously monitored for liquid intrusion and periodically inspected and documented. The latter was historically accomplished with remote still photography. The video system includes a camera, zoom lens, camera positioner, and vertical deployment mechanism. The assembly enters through a 125 mm (5 in) diameter opening. A special attribute of the systems is that they never become larger than the entry hole during camera aiming and can therefore always be retrieved. The latest systems are easily deployable to a remote setup point and can extend down vertically 15 meters (50 ft). The systems are expected to be a valuable asset to tank operations.

  17. Data Acquisition Using Xbox Kinect Sensor

    Science.gov (United States)

    Ballester, Jorge; Pheatt, Charles B.

    2012-12-01

    The study of motion is central in physics education and has taken many forms as technology has provided numerous methods to acquire data. For example, the analysis of still or moving images is particularly effective in discussions of two-dimensional motion. Introductory laboratory measurement methods have progressed through water clocks, spark timers, stopwatches, Polaroid cameras, videocassette recorders, ultrasonic devices, digital video, and most recently high-speed digital video. In this paper we explore the use of newly available imaging technology for the study of motion. The Kinect sensor was introduced in November 2010 by Microsoft as an accessory for the Xbox 360 video game system. Shortly after the product release, a software framework became available that allows a personal computer to capture output from a stand-alone Kinect. Author-developed data acquisition software for the Kinect and several experimental examples are discussed.

  18. Degraded visual environment image/video quality metrics

    Science.gov (United States)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  19. Status, recent developments and perspective of TINE-powered video system, release 3

    International Nuclear Information System (INIS)

    Weisse, S.; Melkumyan, D.; Duval, P.

    2012-01-01

    Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the past year, a milestone was reached as Video System 3 entered production level at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus will be put on the integration of recording and playback of video sequences into Archive/DAQ, a standalone installation of the Video System on a notebook, as well as experiences running on Windows 7 64-bit. In addition, new client-side multi-platform GUI/application developments using Java are about to hit the surface. Last but not least, it must be mentioned that although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered. (authors)

  20. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Video applications on mobile wireless devices are a challenging task due to the limited capacity of batteries. The highly complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design concern for devices integrating complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output under energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
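The utility-driven allocation idea can be sketched with a greedy budget spender: give energy to the decoding subfunctions with the highest utility per unit of energy first. The subfunction names, costs, and utility numbers below are purely illustrative, and the greedy rule is only one simple instance of the utility-maximizing allocation the paper analyzes.

```python
# Sketch of utility-driven energy allocation (illustrative names and numbers).

def allocate(budget, options):
    """Greedy selection by utility per unit energy.

    options: list of (name, energy_cost, utility) tuples.
    Returns the chosen names and the leftover energy.
    """
    chosen = []
    for name, cost, utility in sorted(options, key=lambda o: o[2] / o[1],
                                      reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen, budget

subfunctions = [("entropy_decode", 2, 10),   # essential, high utility
                ("deblocking", 3, 6),
                ("half_pel_interp", 4, 4)]
print(allocate(6, subfunctions))
```

With a budget of 6 the sketch keeps entropy decoding and deblocking but drops half-pel interpolation, mirroring how a scalable decoder degrades gracefully as the battery drains.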

  1. A Vehicle Steering Recognition System Based on Low-Cost Smartphone Sensors

    Directory of Open Access Journals (Sweden)

    Xinhua Liu

    2017-03-01

    Recognizing how a vehicle is steered and then alerting drivers in real time is of utmost importance to the vehicle and driver's safety, since fatal accidents are often caused by dangerous vehicle maneuvers, such as rapid turns, fast lane-changes, etc. Existing solutions using video or in-vehicle sensors have been employed to identify dangerous vehicle maneuvers, but these methods are either subject to environmental effects or require very costly hardware. In the mobile computing era, smartphones have become key tools to develop innovative mobile context-aware systems. In this paper, we present a recognition system for dangerous vehicle steering based on the low-cost sensors found in a smartphone: i.e., the gyroscope and the accelerometer. To identify vehicle steering maneuvers, we focus on the vehicle's angular velocity, which is characterized by gyroscope data from a smartphone mounted in the vehicle. Three steering maneuvers including turns, lane-changes and U-turns are defined, and a vehicle angular velocity matching algorithm based on Fast Dynamic Time Warping (FastDTW) is adopted to recognize the vehicle steering. The results of extensive experiments show that the average accuracy rate of the presented recognition reaches 95%, which implies that the proposed smartphone-based method is suitable for recognizing dangerous vehicle steering maneuvers.
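The matching step can be illustrated by comparing a gyroscope angular-velocity trace against maneuver templates. The paper uses FastDTW; for brevity the sketch below shows plain dynamic time warping, which FastDTW approximates in linear time, and the templates are made-up example traces.

```python
# Classic DTW on angular-velocity traces (FastDTW approximates this in O(n)).

def dtw_distance(a, b):
    """O(len(a)*len(b)) dynamic time warping distance between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

turn_template = [0.0, 0.2, 0.5, 0.5, 0.2, 0.0]   # rad/s, a gentle turn
trace = [0.0, 0.0, 0.2, 0.5, 0.5, 0.2, 0.0]      # same turn, slightly stretched
lane_change = [0.0, 0.3, -0.3, 0.0]
# The warped distance tolerates the stretch, so the trace matches the turn.
print(dtw_distance(trace, turn_template) < dtw_distance(trace, lane_change))  # True
```

DTW's tolerance to stretching is exactly what makes it suitable here: the same maneuver takes a different number of samples at different driving speeds.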

  2. Applications of artificial intelligence to space station: General purpose intelligent sensor interface

    Science.gov (United States)

    Mckee, James W.

    1988-01-01

    This final report describes the accomplishments of the General Purpose Intelligent Sensor Interface task of the Applications of Artificial Intelligence to Space Station grant for the period from October 1, 1987 through September 30, 1988. Portions of the First Biannual Report not revised will not be included but only referenced. The goal is to develop an intelligent sensor system that will simplify the design and development of expert systems using sensors of the physical phenomena as a source of data. This research will concentrate on the integration of image processing sensors and voice processing sensors with a computer designed for expert system development. The result of this research will be the design and documentation of a system in which the user will not need to be an expert in such areas as image processing algorithms, local area networks, image processor hardware selection or interfacing, television camera selection, voice recognition hardware selection, or analog signal processing. The user will be able to access data from video or voice sensors through standard LISP statements without any need to know about the sensor hardware or software.

  3. UrtheCast Second-Generation Earth Observation Sensors

    Science.gov (United States)

    Beckett, K.

    2015-04-01

    UrtheCast's Second-Generation state-of-the-art Earth Observation (EO) remote sensing platform will be hosted on the NASA segment of International Space Station (ISS). This platform comprises a high-resolution dual-mode (pushbroom and video) optical camera and a dual-band (X and L) Synthetic Aperture RADAR (SAR) instrument. These new sensors will complement the first-generation medium-resolution pushbroom and high-definition video cameras that were mounted on the Russian segment of the ISS in early 2014. The new cameras are expected to be launched to the ISS in late 2017 via the Space Exploration Technologies Corporation Dragon spacecraft. The Canadarm will then be used to install the remote sensing platform onto a CBM (Common Berthing Mechanism) hatch on Node 3, allowing the sensor electronics to be accessible from the inside of the station, thus limiting their exposure to the space environment and allowing for future capability upgrades. The UrtheCast second-generation system will be able to take full advantage of the strengths that each of the individual sensors offers, such that the data exploitation capabilities of the combined sensors is significantly greater than from either sensor alone. This represents a truly novel platform that will lead to significant advances in many other Earth Observation applications such as environmental monitoring, energy and natural resources management, and humanitarian response, with data availability anticipated to begin after commissioning is completed in early 2018.

  4. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an infrared (IR) camera and one from an electro-optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
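The pipeline above (SIFT matches, homography estimation, warping, blending) can be shown in its most degenerate form: translation-only registration from matched feature points, which is the homography of a purely sideways-panning camera. The matched point coordinates and the one-dimensional "scanline" blend below are made up purely for illustration.

```python
# Degenerate mosaicking sketch: translation-only registration and blending
# (illustrative data; a real pipeline estimates a full 3x3 homography).

def estimate_translation(matches):
    """Least-squares translation = mean displacement of the matched features."""
    dxs = [x2 - x1 for (x1, y1), (x2, y2) in matches]
    dys = [y2 - y1 for (x1, y1), (x2, y2) in matches]
    return sum(dxs) / len(dxs), sum(dys) / len(dys)

def blend(pix_a, pix_b, shift):
    """Paste frame B after frame A in mosaic coordinates (1-D scanline toy)."""
    mosaic = dict(enumerate(pix_a))
    for i, p in enumerate(pix_b):
        mosaic[i + shift] = p            # frame B dominates in the overlap
    return [mosaic[k] for k in sorted(mosaic)]

matches = [((10, 5), (2, 5)), ((20, 7), (12, 7)), ((31, 9), (23, 9))]
print(estimate_translation(matches))     # -> (-8.0, 0.0): features moved 8 px left
print(blend([1, 2, 3, 4, 5], [4, 5, 6, 7, 8], 3))
```

Averaging the displacements is the least-squares solution for a pure translation; with a full homography the same role is played by DLT or RANSAC-based estimation, which is where the GPU acceleration pays off.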

  5. Temporally coherent 4D video segmentation for teleconferencing

    Science.gov (United States)

    Ehmann, Jana; Guleryuz, Onur G.

    2013-09-01

    We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds, similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
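The two depth-side steps described above can be sketched on a one-dimensional toy scanline: first fill the missing depth readings, then threshold depth to split the user from the background. The nearest-valid-neighbour fill and the fixed near/far threshold are simplifications of the paper's method, chosen only to make the idea concrete.

```python
# Sketch: fill depth dropouts, then threshold depth (simplified stand-in).

def fill_missing(depth, missing=0):
    """Replace missing readings with the nearest valid value to the left,
    falling back to the nearest valid value to the right at the border."""
    filled = list(depth)
    last = None
    for i, d in enumerate(filled):
        if d != missing:
            last = d
        elif last is not None:
            filled[i] = last
    for i in range(len(filled) - 1, -1, -1):   # leading holes, if any
        if filled[i] != missing:
            last = filled[i]
        else:
            filled[i] = last
    return filled

def segment_user(depth, near_threshold=1500):
    """Foreground = pixels closer than the threshold (depth in millimetres)."""
    return [d < near_threshold for d in depth]

scanline = [0, 900, 880, 0, 0, 2400, 2500, 0]   # zeros are sensor dropouts
filled = fill_missing(scanline)
print(filled)            # -> [900, 900, 880, 880, 880, 2400, 2500, 2500]
print(segment_user(filled))
```

The paper additionally refines this coarse depth mask with RGB information and temporal coherence, which is what removes the flicker a per-frame threshold would produce.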

  6. Development of the video streaming system for the radiation safety training

    International Nuclear Information System (INIS)

    Uemura, Jitsuya

    2005-01-01

    Radiation workers have to receive radiation safety training every year. It is very hard for them to attend the training within a limited number of training sessions. We therefore developed a new training system using video streaming techniques and opened a web page for the training on our homepage. Every worker can watch the video lecture at any time and at any place on his PC via the Internet. After watching the video, the worker takes a completion examination. If he passes the examination, he is registered as a radiation worker in the database system for radiation control. (author)

  7. A portable readout system for silicon microstrip sensors

    International Nuclear Information System (INIS)

    Marco-Hernandez, Ricardo

    2010-01-01

    This system can measure the collected charge in one or two microstrip silicon sensors by reading out all the channels of the sensor(s), up to 256. The system is able to operate with different types (p- and n-type) and different sizes (up to 3 cm²) of microstrip silicon sensors, both irradiated and non-irradiated. Heavily irradiated sensors will be used at the Super Large Hadron Collider, so this system can be used to research the performance of microstrip silicon sensors in conditions as similar as possible to the Super Large Hadron Collider operating conditions. The system has two main parts: a hardware part and a software part. The hardware part acquires the sensor signals either from external trigger inputs, in case a radioactive source setup is used, or from a synchronised trigger output generated by the system, if a laser setup is used. The software controls the system and processes the data acquired from the sensors in order to store it in an adequate format. The main characteristics of the system are described. Results of measurements acquired with n- and p-type detectors using both the laser and the radioactive source setups are also presented and discussed.

  8. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    Science.gov (United States)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher frame rate video that were produced by simulation experiments or using an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the exposure length by row within each 8x8 pixel block. This CMOS sensor is not fully controllable pixel by pixel and has line-dependent controls, but it offers more flexibility than regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility of the CMOS sensor to realize pseudo-random sampling for high-speed video acquisition. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
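
    The column-wise reset / row-wise exposure-length constraint described above can be made concrete with a small sketch. The following is one plausible reading of that sampling scheme, not the authors' implementation; the per-column start times and per-row exposure lengths are assumptions:

```python
import numpy as np

def block_exposure_mask(n_frames=8, block=8, rng=None):
    """Build a space-time sampling mask for one 8x8 pixel block.

    One reading of the line-dependent constraint: exposure start is set
    per column, exposure length per row.  mask[t, r, c] == 1 while pixel
    (r, c) of the block integrates light during frame slot t.
    """
    rng = np.random.default_rng(rng)
    start = rng.integers(0, n_frames, size=block)       # per-column reset time
    length = rng.integers(1, n_frames + 1, size=block)  # per-row exposure length
    mask = np.zeros((n_frames, block, block), dtype=np.uint8)
    for r in range(block):
        for c in range(block):
            s = start[c]
            e = min(n_frames, s + length[r])
            mask[s:e, r, c] = 1
    return mask

mask = block_exposure_mask(rng=0)
print(mask.shape)  # (8, 8, 8)
```

    Tiling such per-block masks over the sensor yields the nonuniform space-time sampling pattern from which the high-speed sequence would then be reconstructed.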

  9. Irradiance sensors for solar systems

    Energy Technology Data Exchange (ETDEWEB)

    Storch, A.; Schindl, J. [Oesterreichisches Forschungs- und Pruefzentrum Arsenal GesmbH, Vienna (Austria). Business Unit Renewable Energy

    2004-07-01

    The presented project surveyed the quality of irradiance sensors used in solar systems. In an outdoor measurement campaign, the accuracies of ten commercially available irradiance sensors were evaluated by comparing their results to those of a calibrated Kipp & Zonen CM21 pyranometer. Furthermore, as a simple method for improving the quality of the results, an irradiance calibration was carried out for each sensor and examined for its effectiveness. (orig.)

  10. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convolved with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and, by extensive repetitions, reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (a set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm
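
    The stepwise propagation outlined above, a "noise"-frequency signature passed through sensor and signal-processing components and then Fourier-transformed into the usual power spectral density, can be illustrated with a toy sketch. The stage gains, noise levels and the signature itself are invented for illustration and are not taken from the record:

```python
import numpy as np

# Toy signature: two tones plus white noise, sampled at fs for 1 s.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
signature = (np.sin(2 * np.pi * 60 * t)
             + 0.5 * np.sin(2 * np.pi * 120 * t)
             + 0.1 * rng.standard_normal(t.size))

def stage(x, gain, sigma, rng):
    """One processing step: apply the stage gain and accumulate the
    stage's own additive noise (a crude error-propagation step)."""
    return gain * x + sigma * rng.standard_normal(x.size)

# Propagate stepwise through two stages: sensor, then amplifier.
for gain, sigma in [(2.0, 0.05), (1.5, 0.02)]:
    signature = stage(signature, gain, sigma, rng)

# Usual power spectral density representation via the Fourier transform.
psd = np.abs(np.fft.rfft(signature)) ** 2 / (fs * t.size)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[np.argmax(psd)])  # dominant tone survives propagation: 60 Hz
```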

  11. Distributed Sensor Coordination for Advanced Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Tumer, Kagan [Oregon State Univ., Corvallis, OR (United States)

    2013-07-31

    The ability to collect key system-level information is critical to the safe, efficient and reliable operation of advanced energy systems. With recent advances in sensor development, it is now possible to push some level of decision making directly to computationally sophisticated sensors, rather than wait for data to arrive at a massive centralized location before a decision is made. This type of approach relies on networked sensors (called “agents” from here on) to actively collect and process data, and to provide key control decisions that significantly improve both the quality/relevance of the collected data and the associated decision making. The technological bottlenecks for such sensor networks stem from a lack of mathematics and algorithms to manage the systems, rather than difficulties associated with building and deploying them. Indeed, traditional sensor coordination strategies do not provide adequate solutions for this problem. Passive data collection methods (e.g., large sensor webs) can scale to large systems, but are generally not suited to highly dynamic environments, such as advanced energy systems, where crucial decisions may need to be reached quickly and locally. Approaches based on local decisions, on the other hand, cannot guarantee that each agent performing its task (maximizing an agent objective) will lead to a good network-wide solution (maximizing a network objective) without invoking cumbersome coordination routines. There is currently a lack of algorithms that enable self-organization and blend the efficiency of local decision making with the system-level guarantees of global decision making, particularly when the systems operate in dynamic and stochastic environments. In this work we addressed this critical gap and provided a comprehensive solution to the problem of sensor coordination to ensure the safe, reliable, and robust operation of advanced energy systems. The differentiating aspect of the proposed work is in shifting the focus
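
    One well-known device in this literature for reconciling agent objectives with a network objective is the difference reward, which credits each agent with its marginal contribution to the global objective. The toy sketch below illustrates the idea only; it is not the project's actual algorithm, and the coverage objective and sensor footprints are invented:

```python
def network_objective(active_sensors, cells_covered_by):
    """G = number of distinct grid cells covered by the active sensors."""
    covered = set()
    for s in active_sensors:
        covered |= cells_covered_by[s]
    return len(covered)

def difference_reward(agent, active_sensors, cells_covered_by):
    """D_i = G(z) - G(z without agent i): agent i's marginal contribution.

    An agent maximizing D_i cannot profit from actions that do not also
    raise the network objective G, which is the alignment property at stake.
    """
    g_all = network_objective(active_sensors, cells_covered_by)
    without = [s for s in active_sensors if s != agent]
    return g_all - network_objective(without, cells_covered_by)

coverage = {"a": {1, 2}, "b": {2, 3}, "c": {2}}   # toy sensor footprints
active = ["a", "b", "c"]
print([difference_reward(s, active, coverage) for s in active])  # [1, 1, 0]
```

    Sensor "c" adds nothing beyond what "a" and "b" already cover, so its difference reward is zero even though it is active.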

  12. The Radio Frequency Health Node Wireless Sensor System

    Science.gov (United States)

    Valencia, J. Emilio; Stanley, Priscilla C.; Mackey, Paul J.

    2009-01-01

    The Radio Frequency Health Node (RFHN) wireless sensor system differs from other wireless sensor systems in ways originally intended to enhance its utility as an instrumentation system for a spacecraft. The RFHN can also be adapted to terrestrial applications that require operational flexibility and integrability into higher-level instrumentation and data acquisition systems. As shown in the figure, the heart of the system is the RFHN, a unit that passes commands and data between (1) one or more commercially available wireless sensor units (optionally, also including wired sensor units) and (2) command and data interfaces with a local control computer that may be part of the spacecraft or other engineering system in which the wireless sensor system is installed. In turn, the local control computer can be in radio or wire communication with a remote control computer that may be part of a higher-level system. The remote control computer, acting via the local control computer and the RFHN, can not only monitor readout data from the sensor units but also remotely configure (program or reprogram) the RFHN and the sensor units during operation. In a spacecraft application, the RFHN and the sensor units can also be configured more directly, prior to launch, via a serial interface that includes an umbilical cable between the spacecraft and ground support equipment. In either case, the RFHN wireless sensor system has the flexibility to be configured, as required, with different numbers and types of sensors for different applications. The RFHN can be used to effect real-time transfer of data from, and commands to, the wireless sensor units. It can also store data for later retrieval by an external computer. The RFHN communicates with the wireless sensor units via a radio transceiver module.
The modular design of the RFHN makes it possible to add radio transceiver modules as needed to accommodate additional sets of wireless sensor

  13. Interactive design of patient-oriented video-games for rehabilitation: concept and application.

    Science.gov (United States)

    Lupinacci, Giorgia; Gatti, Gianluca; Melegari, Corrado; Fontana, Saverio

    2018-04-01

    Serious video-games are innovative tools used to train the motor skills of subjects affected by neurological disorders. They are often developed to train a specific type of patient, with the rules of the game defined in advance. A system is proposed that allows the therapist to design highly patient-oriented video-games without specific informatics skills. The system consists of one personal computer, two screens, a Kinect™ sensor and software developed here specifically for the design of the video-games. It was tested with the collaboration of three therapists and six patients, and each patient filled in two questionnaires to evaluate their appreciation of the rehabilitative sessions. The therapists learned easily how to use the system, and the patients encountered no serious difficulties. The questionnaires showed overall good satisfaction among the patients and highlighted the key role of the therapist in involving the patients during the rehabilitative session. The proposed system was found to be effective for developing patient-oriented video-games for rehabilitation. Its two main advantages are that the therapist can (i) develop personalized video-games without informatics skills and (ii) adapt the game settings to patients affected by different pathologies. Implications for rehabilitation: Virtual reality and serious video games offer the opportunity to transform traditional therapy into a more pleasant experience, allowing patients to train their motor and cognitive skills. Both therapists and patients should be involved in the development of rehabilitative solutions for them to be highly patient-oriented. A system for the design of rehabilitative games by the therapist is described, and the feedback of three therapists and six patients is reported.

  14. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    Science.gov (United States)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  15. Sensor Technologies for Intelligent Transportation Systems.

    Science.gov (United States)

    Guerrero-Ibáñez, Juan; Zeadally, Sherali; Contreras-Castillo, Juan

    2018-04-16

    Modern society faces serious problems with transportation systems, including but not limited to traffic congestion, safety, and pollution. Information communication technologies have gained increasing attention and importance in modern transportation systems. Automotive manufacturers are developing in-vehicle sensors and their applications in different areas including safety, traffic management, and infotainment. Government institutions are implementing roadside infrastructures such as cameras and sensors to collect data about environmental and traffic conditions. By seamlessly integrating vehicles and sensing devices, their sensing and communication capabilities can be leveraged to achieve smart and intelligent transportation systems. We discuss how sensor technology can be integrated with the transportation infrastructure to achieve a sustainable Intelligent Transportation System (ITS) and how safety, traffic control and infotainment applications can benefit from multiple sensors deployed in different elements of an ITS. Finally, we discuss some of the challenges that need to be addressed to enable a fully operational and cooperative ITS environment.

  16. Sensor Technologies for Intelligent Transportation Systems

    Science.gov (United States)

    Guerrero-Ibáñez, Juan; Zeadally, Sherali

    2018-01-01

    Modern society faces serious problems with transportation systems, including but not limited to traffic congestion, safety, and pollution. Information communication technologies have gained increasing attention and importance in modern transportation systems. Automotive manufacturers are developing in-vehicle sensors and their applications in different areas including safety, traffic management, and infotainment. Government institutions are implementing roadside infrastructures such as cameras and sensors to collect data about environmental and traffic conditions. By seamlessly integrating vehicles and sensing devices, their sensing and communication capabilities can be leveraged to achieve smart and intelligent transportation systems. We discuss how sensor technology can be integrated with the transportation infrastructure to achieve a sustainable Intelligent Transportation System (ITS) and how safety, traffic control and infotainment applications can benefit from multiple sensors deployed in different elements of an ITS. Finally, we discuss some of the challenges that need to be addressed to enable a fully operational and cooperative ITS environment. PMID:29659524

  17. Sensor Technologies for Intelligent Transportation Systems

    Directory of Open Access Journals (Sweden)

    Juan Guerrero-Ibáñez

    2018-04-01

    Full Text Available Modern society faces serious problems with transportation systems, including but not limited to traffic congestion, safety, and pollution. Information communication technologies have gained increasing attention and importance in modern transportation systems. Automotive manufacturers are developing in-vehicle sensors and their applications in different areas including safety, traffic management, and infotainment. Government institutions are implementing roadside infrastructures such as cameras and sensors to collect data about environmental and traffic conditions. By seamlessly integrating vehicles and sensing devices, their sensing and communication capabilities can be leveraged to achieve smart and intelligent transportation systems. We discuss how sensor technology can be integrated with the transportation infrastructure to achieve a sustainable Intelligent Transportation System (ITS) and how safety, traffic control and infotainment applications can benefit from multiple sensors deployed in different elements of an ITS. Finally, we discuss some of the challenges that need to be addressed to enable a fully operational and cooperative ITS environment.

  18. Optimal sensor configuration for complex systems

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    Considers the problem of sensor configuration for complex systems. Our approach involves definition of an appropriate optimality criterion or performance measure, and description of an efficient and practical algorithm for achieving the optimality objective. The criterion for optimal sensor configuration is based on maximizing the overall sensor response while minimizing the correlation among the sensor outputs. The procedure for sensor configuration is based on simultaneous perturbation stochastic approximation (SPSA). SPSA avoids the need for detailed modeling of the sensor response by simply relying on observed responses as obtained by limited experimentation with test sensor configurations. We illustrate the approach with the optimal placement of acoustic sensors for signal detection in structures. This includes both a computer simulation study for an aluminum plate, and real...
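
    SPSA itself is a standard algorithm: it estimates the gradient from just two noisy evaluations of the performance measure per iteration, regardless of dimension, which is why no detailed sensor-response model is needed. Below is a minimal sketch on a noisy toy criterion; the loss, gain sequences and iteration count are illustrative, not the paper's acoustic-sensor setup:

```python
import numpy as np

def spsa_minimize(loss, theta0, a=0.1, c=0.1, n_iter=200, rng=None):
    """Minimize a noisy loss with simultaneous perturbation SPSA."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602          # commonly used SPSA gain decay
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Bernoulli +/-1
        # Two loss evaluations estimate the whole gradient at once.
        g_hat = (loss(theta + ck * delta)
                 - loss(theta - ck * delta)) / (2 * ck) / delta
        theta = theta - ak * g_hat
    return theta

# Toy "placement" criterion: quadratic bowl observed with measurement noise.
rng = np.random.default_rng(1)
noisy_loss = lambda th: np.sum((th - 2.0) ** 2) + 0.01 * rng.standard_normal()
theta_hat = spsa_minimize(noisy_loss, [0.0, 0.0], rng=2)
print(theta_hat)  # approaches the optimum at [2, 2]
```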

  19. Sensor Webs as Virtual Data Systems for Earth Science

    Science.gov (United States)

    Moe, K. L.; Sherwood, R.

    2008-05-01

    The NASA Earth Science Technology Office established a 3-year Advanced Information Systems Technology (AIST) development program in late 2006 to explore the technical challenges associated with integrating sensors, sensor networks, data assimilation and modeling components into virtual data systems called "sensor webs". The AIST sensor web program was initiated in response to a renewed emphasis on the sensor web concepts. In 2004, NASA proposed an Earth science vision for a more robust Earth observing system, coupled with remote sensing data analysis tools and advances in Earth system models. The AIST program is conducting the research and developing components to explore the technology infrastructure that will enable the visionary goals. A working statement for a NASA Earth science sensor web vision is the following: On-demand sensing of a broad array of environmental and ecological phenomena across a wide range of spatial and temporal scales, from a heterogeneous suite of sensors both in-situ and in orbit. Sensor webs will be dynamically organized to collect data, extract information from it, accept input from other sensor / forecast / tasking systems, interact with the environment based on what they detect or are tasked to perform, and communicate observations and results in real time. The focus on sensor webs is to develop the technology and prototypes to demonstrate the evolving sensor web capabilities. There are 35 AIST projects ranging from 1 to 3 years in duration addressing various aspects of sensor webs involving space sensors such as Earth Observing-1, in situ sensor networks such as the southern California earthquake network, and various modeling and forecasting systems. Some of these projects build on proof-of-concept demonstrations of sensor web capabilities like the EO-1 rapid fire response initially implemented in 2003. 
Other projects simulate future sensor web configurations to evaluate the effectiveness of sensor-model interactions for producing

  20. A remote educational system in medicine using digital video.

    Science.gov (United States)

    Hahm, Joon Soo; Lee, Hang Lak; Kim, Sun Il; Shimizu, Shuji; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Tae Eun; Yun, Ji Won; Park, Yong Jin; Naoki, Nakashima; Koji, Okamura

    2007-03-01

    Telemedicine has opened the door to a wide range of learning experiences and simultaneous feedback to doctors and students at various remote locations. However, there are limitations, such as the lack of approved international standards of ethics. The aim of our study was to establish a telemedical education system through the development of high-quality images, using a digital transfer system on a high-speed network. Using telemedicine, surgical images can be sent not only to domestic sites but also abroad, and opinions regarding surgical procedures can be exchanged between the operating room and a remote location. The Asia Pacific Information Infrastructure (APII) link, a submarine cable between Busan and Fukuoka, was used to connect Korea with Japan, and the Korea Advanced Research Network (KOREN) was used to connect Busan with Seoul. Teleconferencing and video streaming between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan were realized using the Digital Video Transfer System (DVTS) over an IPv4 network. Four endoscopic surgeries were successfully transmitted between Seoul and Kyushu, while concomitant teleconferences took place between the two sites throughout the operations. A sufficient bandwidth of 60 Mbps could be maintained for two-line transmission. The transmitted video had no frame loss at a rate of 30 images per second. The sound was also clear, and the time delay was less than 0.3 sec. Our experience has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over Internet protocol, which is easy to perform, reliable, and economical. Our network system may become a promising tool for worldwide telemedical communication in the future.

  1. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    Full Text Available With the rapid development of video surveillance technology, and especially the popularity of cloud-based video surveillance applications, video data has begun to grow explosively. However, in a cloud-based video surveillance system, replicas occupy a large amount of storage space, and the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant-replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant-replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.
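
    The two mechanisms in this record can be caricatured in a few lines. Everything below — the security levels, replica counts, and camera-neighbour map — is invented for illustration and is not the paper's actual policy:

```python
# Assumed policy: higher security level -> more redundant replicas.
REPLICAS_BY_LEVEL = {"low": 1, "normal": 2, "high": 3}

def replica_count(security_level):
    """Number of replicas to keep for a video object of a given level."""
    return REPLICAS_BY_LEVEL[security_level]

def prefetch_plan(camera, when, neighbours):
    """When a user plays back (camera, when), warm the cache with the
    same time window from spatially correlated (neighbouring) cameras."""
    return [(c, when) for c in neighbours.get(camera, [])]

neighbours = {"cam1": ["cam2", "cam3"]}   # assumed location correlation
print(replica_count("high"))                       # 3
print(prefetch_plan("cam1", "09:00", neighbours))  # [('cam2', '09:00'), ('cam3', '09:00')]
```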

  2. Test on radiation-withstanding properties of sensors

    International Nuclear Information System (INIS)

    Yagi, Hideyuki; Kakuta, Tsunemi; Ara, Katsuyuki

    1986-01-01

    In order to be used in remote operation or in-line measuring systems in facilities handling radioactive substances, sensors with strengthened radiation-withstanding performance have been under development. As part of this effort, work has been done to phenomenologically grasp the radiation effects on various sensors and their materials, and to acquire basic data. Irradiation tests were carried out on solid-state image pick-up elements, optical parts, eddy current sensors, pressure-sensitive rubber, photoelectric proximity sensors and others, and knowledge of their deterioration was obtained. In addition, trial sensors and video cameras with improved radiation-withstanding performance were fabricated, and their performance was tested. This is an interim report on the test results. The series of irradiation tests reported here yielded the basic data required to guide the development of radiation-withstanding sensors. However, in the present irradiation tests, the number of specimens was too small to assure the radiation-withstanding performance. To improve the radiation-withstanding performance of these sensors further, it is necessary to carry out irradiation tests on elements such as capacitors, diodes and ICs to accumulate basic data. (Kako, I.)

  3. Sensor system for fuel transport vehicle

    Science.gov (United States)

    Earl, Dennis Duncan; McIntyre, Timothy J.; West, David L.

    2016-03-22

    An exemplary sensor system for a fuel transport vehicle can comprise a fuel marker sensor positioned between a fuel storage chamber of the vehicle and an access valve for the fuel storage chamber of the vehicle. The fuel marker sensor can be configured to measure one or more characteristics of one or more fuel markers present in the fuel adjacent the sensor, such as when the marked fuel is unloaded at a retail station. The one or more characteristics can comprise concentration and/or identity of the one or more fuel markers in the fuel. Based on the measured characteristics of the one or more fuel markers, the sensor system can identify the fuel and/or can determine whether the fuel has been adulterated after the marked fuel was last measured, such as when the marked fuel was loaded into the vehicle.
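
    One plausible reading of the adulteration check this record describes is a tolerance comparison between the marker concentrations recorded at loading and those measured at the access valve; the tolerance value and marker identifiers below are assumptions, not from the patent:

```python
def check_fuel(loaded, measured, rel_tol=0.05):
    """Return (is_known_fuel, adulterated) for measured marker readings.

    loaded:   {marker_id: concentration} recorded when the vehicle was filled
    measured: {marker_id: concentration} read near the access valve
    rel_tol:  allowed relative drift before the fuel is flagged (assumed)
    """
    if set(measured) != set(loaded):          # wrong or missing markers
        return False, True
    adulterated = any(
        abs(measured[m] - loaded[m]) > rel_tol * loaded[m] for m in loaded
    )
    return True, adulterated

print(check_fuel({"M1": 10.0}, {"M1": 9.8}))   # (True, False)
print(check_fuel({"M1": 10.0}, {"M1": 7.0}))   # (True, True) -- e.g. diluted
```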

  4. Solid-State Gas Sensors: Sensor System Challenges in the Civil Security Domain.

    Science.gov (United States)

    Müller, Gerhard; Hackner, Angelika; Beer, Sebastian; Göbel, Johann

    2016-01-20

    The detection of military high explosives and illicit drugs presents problems of paramount importance in the fields of counter terrorism and criminal investigation. Effectively dealing with such threats requires hand-portable, mobile and affordable instruments. The paper shows that solid-state gas sensors can contribute to the development of such instruments provided the sensors are incorporated into integrated sensor systems, which acquire the target substances in the form of particle residue from suspect objects and which process the collected residue through a sequence of particle sampling, solid-vapor conversion, vapor detection and signal treatment steps. Considering sensor systems with metal oxide gas sensors at the backend, it is demonstrated that significant gains in sensitivity, selectivity and speed of response can be attained when the threat substances are sampled in particle as opposed to vapor form.

  5. Integrated active sensor system for real time vibration monitoring.

    Science.gov (United States)

    Liang, Qijie; Yan, Xiaoqin; Liao, Xinqin; Cao, Shiyao; Lu, Shengnan; Zheng, Xin; Zhang, Yue

    2015-11-05

    We report a self-powered, lightweight and cost-effective active sensor system for vibration monitoring with multiplexed operation, based on contact electrification between the sensor and detected objects. The as-fabricated sensor matrix is capable of monitoring and mapping the vibration state of a large number of units. The monitored quantities include the on-off state, vibration frequency and vibration amplitude of each unit. The active sensor system delivers a detection range of 0-60 Hz, high accuracy (relative error below 0.42%) and long-term stability (10,000 cycles). In the time dimension, the sensor can provide a memory of the vibration process by recording the outputs of the sensor system over an extended period of time. In addition, the developed sensor system can perform detection in both contact and non-contact modes. Its high performance is not sensitive to the shape or the conductivity of the detected object. With these features, the active sensor system has great potential in automatic control, remote operation, surveillance and security systems.
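
    Per-unit readout of the three monitored quantities (on-off state, frequency, amplitude) can be sketched as simple spectral analysis of one sampled channel; the threshold and sample rate are illustrative and not taken from the paper:

```python
import numpy as np

def vibration_state(x, fs, on_threshold=0.05):
    """Return (on, frequency_hz, amplitude) for one sensor channel.

    on: whether the unit vibrates above an (assumed) amplitude threshold;
    frequency: location of the dominant spectral peak; amplitude: half the
    peak-to-peak excursion of the sampled output.
    """
    amplitude = (x.max() - x.min()) / 2.0
    if amplitude < on_threshold:
        return False, 0.0, amplitude
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freq = np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spec)]
    return True, freq, amplitude

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = 0.8 * np.sin(2 * np.pi * 42 * t)   # a unit vibrating at 42 Hz
print(vibration_state(x, fs))          # (True, 42.0, ~0.8)
```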

  6. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative-selection approach via joint embedding and capped l2,1-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second uses a capped l2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
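
    As a greatly simplified stand-in for the paper's method, the sketch below selects representatives by row-sparse self-expressive coding, min ||X - XZ||_F^2 + lam * sum_i ||Z_i||_2, solved by iterative reweighting. It omits the joint embedding and the capped l2,1-norm that are the paper's actual contributions, and the data are synthetic:

```python
import numpy as np

def select_representatives(X, lam=1.0, k=2, n_iter=50, eps=1e-8):
    """X: (d, n) feature matrix, columns are frames/shots.
    Solve the row-sparse self-expressive model by iteratively reweighted
    least squares and return the k columns with the largest row norms of Z.
    """
    n = X.shape[1]
    G = X.T @ X                        # Gram matrix
    Z = np.eye(n)
    for _ in range(n_iter):
        row_norms = np.linalg.norm(Z, axis=1)
        D = np.diag(1.0 / (2.0 * row_norms + eps))   # reweighting for sum ||Z_i||_2
        Z = np.linalg.solve(G + lam * D, G)          # (G + lam*D) Z = G
    score = np.linalg.norm(Z, axis=1)
    return np.argsort(score)[::-1][:k]

# Synthetic data: ten 2-D features drawn around two well-separated "events".
rng = np.random.default_rng(0)
c1 = np.array([5.0, 0.0])[:, None] + 0.1 * rng.standard_normal((2, 5))
c2 = np.array([0.0, 5.0])[:, None] + 0.1 * rng.standard_normal((2, 5))
X = np.hstack([c1, c2])
idx = select_representatives(X, k=2)
print(sorted(int(i) for i in idx))
```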

  7. Fusion of Images from Dissimilar Sensor Systems

    National Research Council Canada - National Science Library

    Chow, Khin

    2004-01-01

    Different sensors exploit different regions of the electromagnetic spectrum; therefore a multi-sensor image fusion system can take full advantage of the complementary capabilities of individual sensors in the suit...

  8. Progress in passive submillimeter-wave video imaging

    Science.gov (United States)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m2 and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden-threat detection and illustrate possible application scenarios.

  9. Development of wireless sensor network for landslide monitoring system

    International Nuclear Information System (INIS)

    Suryadi; Puranto, Prabowo; Adinanta, Hendra; Tohari, Adrin; Priambodo, Purnomo S

    2017-01-01

    A wireless sensor network has been developed to monitor the soil movement of observed areas periodically. The system consists of four nodes and one gateway installed over an area of 0.2 km². Each node has two types of sensor: an inclinometer and an extensometer. The inclinometer measures the tilt of a structure, while the extensometer measures the displacement of soil movement. Each node is also supported by a wireless communication device, a solar power supply unit, and a microcontroller unit called the sensor module. The system also includes a gateway module serving as the main communication system, consisting of a wireless communication device, a power supply unit, and a rain gauge to measure the rainfall intensity of the observed area. The inclinometer and extensometer sensors are wired to the sensor module, but the sensor modules communicate with the gateway wirelessly. The four nodes are also connected to each other wirelessly, collecting the data from the inclinometer and extensometer sensors. The gateway module transmits an instruction code to each sensor module one by one and collects the data from them. The gateway module is an important part, communicating not only with the sensor modules but also with the server. The wireless system was designed to reduce electric power consumption and is powered by an 80 Wp solar panel and a 55 Ah battery. The system has been implemented in Pangalengan, Bandung, an area with high rainfall intensity, and the data can be seen on a website. (paper)
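
    The polling scheme described, where the gateway transmits an instruction code to each sensor module one by one and collects inclinometer and extensometer data plus its own rain-gauge reading, can be sketched as follows; all message formats and node behaviour here are invented:

```python
class SensorNode:
    """Toy sensor module holding one inclinometer and one extensometer."""

    def __init__(self, node_id, tilt_deg, displacement_mm):
        self.node_id = node_id
        self.tilt_deg = tilt_deg
        self.displacement_mm = displacement_mm

    def respond(self, instruction):
        """Answer an (assumed) instruction code from the gateway."""
        if instruction == "READ":
            return {"node": self.node_id,
                    "tilt_deg": self.tilt_deg,
                    "displacement_mm": self.displacement_mm}
        raise ValueError("unknown instruction")

def gateway_poll(nodes, rainfall_mm):
    """Poll the nodes one by one, then attach the gateway's rain gauge."""
    readings = [n.respond("READ") for n in nodes]
    return {"nodes": readings, "rainfall_mm": rainfall_mm}

# Four nodes, as in the deployment described above; values are made up.
nodes = [SensorNode(i, tilt_deg=0.1 * i, displacement_mm=2.0 * i)
         for i in range(1, 5)]
report = gateway_poll(nodes, rainfall_mm=12.5)
print(len(report["nodes"]))  # 4
```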

  10. Automatic Water Sensor Window Opening System

    KAUST Repository

    Percher, Michael

    2013-01-01

    A system can automatically open at least one window of a vehicle when the vehicle is being submerged in water. The system can include a water collector and a water sensor, and when the water sensor detects water in the water collector, at least one window of the vehicle opens.

  12. Integrating IPix immersive video surveillance with unattended and remote monitoring (UNARM) systems

    International Nuclear Information System (INIS)

    Michel, K.D.; Klosterbuer, S.F.; Langner, D.C.

    2004-01-01

    Commercially available IPix cameras and software are being researched as a means by which an inspector can be virtually immersed into a nuclear facility. A single IPix camera can provide 360 by 180 degree views with full pan-tilt-zoom capability, and with no moving parts on the camera mount. Immersive video technology can be merged into the current Unattended and Remote Monitoring (UNARM) system, thereby providing an integrated system of monitoring capabilities that tie together radiation, video, isotopic analysis, Global Positioning System (GPS), etc. The integration of the immersive video capability with other monitoring methods already in place provides a significantly enhanced situational awareness to the International Atomic Energy Agency (IAEA) inspectors.

  13. Interactive video audio system: communication server for INDECT portal

    Science.gov (United States)

    Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem

    2014-05-01

    The paper deals with the presentation of the IVAS system within the 7FP EU INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information. It is part of the Seventh Framework Programme of the European Union. We participate in the development of the INDECT portal and the Interactive Video Audio System (IVAS). IVAS provides a communication gateway between police officers working in the dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command officers in the field via text messages, voice, or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can receive pictures or videos sent by the commander in the office and respond to commands via text or multimedia messages taken with their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.

  14. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  15. Shape Distributions of Nonlinear Dynamical Systems for Video-Based Inference.

    Science.gov (United States)

    Venkataraman, Vinay; Turaga, Pavan

    2016-12-01

    This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems, which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods, each with its respective drawbacks. The novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of the dynamics. The proposed framework has two main advantages over traditional approaches: a) the representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as the Lorenz and Rossler systems, where our feature representations (shape distributions) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses of these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small or variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
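
    The core idea above, describing the attractor's shape directly from observational data, can be sketched as a delay embedding followed by a pairwise-distance histogram. The embedding parameters and the histogram descriptor below are illustrative assumptions; they are a generic stand-in, not the paper's exact shape features.

```python
import math

def delay_embed(series, dim=3, tau=2):
    """Reconstruct a phase-space trajectory from a scalar time series
    (Takens delay embedding)."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def shape_distribution(points, bins=8):
    """Normalized histogram of pairwise distances: a simple descriptor
    of the shape of the reconstructed attractor."""
    dists = [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    d_max = max(dists)
    hist = [0] * bins
    for d in dists:
        hist[min(int(d / d_max * bins), bins - 1)] += 1
    total = len(dists)
    return [h / total for h in hist]

# Toy observational data: a clean oscillation.
series = [math.sin(0.3 * t) for t in range(200)]
feature = shape_distribution(delay_embed(series))
```

    The resulting fixed-length vector can be fed to any standard classifier, which is what makes such shape features attractive for recognition tasks.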

  16. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm image resolution was obtained by using a super-high-quality TV camera, developed for X-radiography, viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, so many test samples can be observed sequentially when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 x 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR-type reactor fuels and for the investigation of moving objects.

  17. Battery management system with distributed wireless sensors

    Science.gov (United States)

    Farmer, Joseph C.; Bandhauer, Todd M.

    2016-02-23

    A system for monitoring parameters of an energy storage system having a multiplicity of individual energy storage cells. A radio frequency identification and sensor unit is connected to each of the individual energy storage cells. The radio frequency identification and sensor unit operates to sense the parameter of each individual energy storage cell and provides radio frequency transmission of the parameters of each individual energy storage cell. A management system monitors the radio frequency transmissions from the radio frequency identification and sensor units for monitoring the parameters of the energy storage system.

  18. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics… To improve side information and noise modeling and also learn from the previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate for the weaknesses of block-based SI generation and also utilizes clustering of DCT blocks to capture cross-band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors…

  19. Optimal sensor configuration for complex systems

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    The paper considers the problem of sensor configuration for complex systems with the aim of maximizing the useful information about certain quantities of interest. Our approach involves: 1) definition of an appropriate optimality criterion or performance measure; and 2) description of an efficient and practical algorithm for achieving the optimality objective. The criterion for optimal sensor configuration is based on maximizing the overall sensor response while minimizing the correlation among the sensor outputs, so as to minimize the redundant information being provided by the multiple sensors. The procedure for sensor configuration is based on the simultaneous perturbation stochastic approximation (SPSA) algorithm. SPSA avoids the need for detailed modeling of the sensor response by simply relying on the observed responses obtained by limited experimentation with test sensor configurations. We…
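
    A minimal sketch of the SPSA iteration mentioned above, which needs only two loss evaluations per step regardless of the problem dimension. The gain sequences and the toy quadratic criterion are illustrative assumptions, not the paper's actual sensor-response measure.

```python
import random

def spsa_minimize(loss, theta, iters=200, a=0.1, c=0.1, seed=0):
    """SPSA: estimate the gradient from only two loss evaluations per
    iteration, avoiding any detailed model of the response surface."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                           # commonly used gain decay
        ck = c / k ** 0.101
        delta = [rng.choice((-1, 1)) for _ in theta]  # Bernoulli perturbation
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2 * ck)     # common gradient numerator
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Toy stand-in for the sensor-configuration criterion (minimum at [1, 1]).
theta_opt = spsa_minimize(lambda th: sum((t - 1.0) ** 2 for t in th), [0.0, 0.0])
```

    In the paper's setting, the loss evaluation would be replaced by an experiment with a test sensor configuration, which is exactly why the two-evaluation property matters.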

  20. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited-exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable-gas-producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable-gas-producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the tanks in question. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  1. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors such as acoustic, seismic, vibration, temperature, and humidity sensors. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low-latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network, and we present lessons learned in the building and daily usage of the network.

  2. Design of a highly integrated video acquisition module for smart video flight unit development

    Science.gov (United States)

    Lebre, V.; Gasti, W.

    2017-11-01

    CCD and APS devices are widely used in space missions as instrument sensors and/or in avionics units such as star detectors/trackers. Therefore, various and numerous designs of video acquisition chains have been produced. Basically, a classical video acquisition chain consists of two main functional blocks: the Proximity Electronics (PEC), including the detector drivers, and the Analogue Processing Chain (APC) electronics, which embeds the ADC, a master sequencer, and the host interface. Nowadays, low-power technologies make it possible to improve the integration, radiometric performance, and power budget of video units and to standardize video unit design and development. To this end, ESA has initiated a development activity through a competitive process requesting the expertise of experienced actors in the field of high-resolution electronics for Earth observation and scientific missions. THALES ALENIA SPACE has been granted this activity as prime contractor through the ESA contract called HIVAC, which stands for Highly Integrated Video Acquisition Chain. This paper presents the main objectives of the ongoing HIVAC project and focuses on the functionalities and performance offered by the use of the under-development HIVAC board for future optical instruments.

  3. Vibration welding system with thin film sensor

    Science.gov (United States)

    Cai, Wayne W; Abell, Jeffrey A; Li, Xiaochun; Choi, Hongseok; Zhao, Jingzhou

    2014-03-18

    A vibration welding system includes an anvil, a welding horn, a thin film sensor, and a process controller. The anvil and horn include working surfaces that contact a work piece during the welding process. The sensor measures a control value at the working surface. The measured control value is transmitted to the controller, which controls the system in part using the measured control value. The thin film sensor may include a plurality of thermopiles and thermocouples which collectively measure temperature and heat flux at the working surface. A method includes providing a welder device with a slot adjacent to a working surface of the welder device, inserting the thin film sensor into the slot, and using the sensor to measure a control value at the working surface. A process controller then controls the vibration welding system in part using the measured control value.

  4. Solid-State Gas Sensors: Sensor System Challenges in the Civil Security Domain

    Directory of Open Access Journals (Sweden)

    Gerhard Müller

    2016-01-01

    The detection of military high explosives and illicit drugs presents problems of paramount importance in the fields of counter terrorism and criminal investigation. Effectively dealing with such threats requires hand-portable, mobile and affordable instruments. The paper shows that solid-state gas sensors can contribute to the development of such instruments provided the sensors are incorporated into integrated sensor systems, which acquire the target substances in the form of particle residue from suspect objects and which process the collected residue through a sequence of particle sampling, solid-vapor conversion, vapor detection and signal treatment steps. Considering sensor systems with metal oxide gas sensors at the backend, it is demonstrated that significant gains in sensitivity, selectivity and speed of response can be attained when the threat substances are sampled in particle as opposed to vapor form.

  5. Virtual Video Prototyping for Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the ways of representing design ideas in terms of virtual video prototypes, which offer new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design issues, since one cannot avoid paying attention to the physical, real-world constraints and to details in the usage-interaction between users and technology. From the users' perspective, during our evaluation of the virtual video prototype, we experienced how it enabled users to relate…

  6. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
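
    The order-statistics idea behind such a filter, a rank filter over a spatiotemporal neighborhood, can be sketched in software. This plain Python version is only a conceptual stand-in for the bit-serial FPLD realization; the 3×3 window over three consecutive frames is an assumed geometry.

```python
def spatiotemporal_median(frames, t, y, x):
    """Order-statistics filter: the median over a 3x3 spatial window taken
    across the previous, current, and next frames (27 samples in total)."""
    samples = [frames[t + dt][y + dy][x + dx]
               for dt in (-1, 0, 1)
               for dy in (-1, 0, 1)
               for dx in (-1, 0, 1)]
    samples.sort()
    return samples[len(samples) // 2]  # middle order statistic

# Three 3x3 frames of gray value 10 with one impulse-noise pixel.
frames = [[[10, 10, 10], [10, 10, 10], [10, 10, 10]] for _ in range(3)]
frames[1][1][1] = 255
cleaned = spatiotemporal_median(frames, t=1, y=1, x=1)
```

    Because the output is an order statistic rather than a mean, a single outlier pixel cannot pull the result away from the surrounding values, which is what makes such filters attractive for impulse noise.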

  7. Active Sensor Configuration Validation for Refrigeration Systems

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Blanke, Mogens; Niemann, Hans Henrik

    2010-01-01

    Major faults in the commissioning phase of refrigeration systems are caused by defects related to sensors. With a number of similar sensors available that do not differ by type but only by spatial location in the plant, interchange of sensors is a common defect. With sensors being used quite differently by the control system, fault-finding is difficult in practice and defects are regularly causing commissioning delays at considerable expense. Validation and handling of faults in the sensor configuration are therefore essential to cut costs during commissioning. With passive fault-diagnosis methods falling short on this problem, this paper suggests an active diagnosis procedure to isolate sensor faults at the commissioning stage, before normal operation has started. Using statistical methods, residuals are evaluated versus multiple hypothesis models in a minimization process to uniquely…
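
    The hypothesis-minimization step described above can be sketched as follows: each wiring hypothesis predicts a sensor response, and the hypothesis with the smallest squared residual against the measurements is selected. The hypothesis names and the predicted values are invented for illustration, not taken from the paper.

```python
def isolate_configuration(measured, hypotheses):
    """Select the hypothesis model whose predicted responses minimize the
    squared residual against the measured sensor outputs."""
    def residual(pred):
        return sum((m - p) ** 2 for m, p in zip(measured, pred))
    return min(hypotheses, key=lambda name: residual(hypotheses[name]))

# Invented example: two sensors, two wiring hypotheses.
hypotheses = {
    "correct": [5.0, -2.0],   # predicted responses if the wiring is correct
    "swapped": [-2.0, 5.0],   # predicted responses if the sensors are swapped
}
diagnosis = isolate_configuration([-1.9, 5.1], hypotheses)
```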

  8. EDICAM fast video diagnostic installation on the COMPASS tokamak

    International Nuclear Information System (INIS)

    Szappanos, A.; Berta, M.; Hron, M.; Panek, R.; Stoeckel, J.; Tulipan, S.; Veres, G.; Weinzettl, V.; Zoletnik, S.

    2010-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed by the Hungarian Association and was installed on the COMPASS tokamak at the Institute of Plasma Physics AS CR in Prague in February 2009. The standalone system contains a data acquisition PC and a prototype sensor module of EDICAM. An appropriate optical system has been designed and adjusted for the local requirements, and a mechanical holder keeps the camera out of the magnetic field. The fast camera contains a monochrome CMOS sensor with advanced control features and spectral sensitivity in the visible range. A special web-based control interface has been implemented using the Java Spring framework to provide the control features in a graphical user environment. The Java native interface (JNI) is used to reach the driver functions and to collect the data stored by direct memory access (DMA). Using a built-in real-time streaming server, one can see the live video from the camera through any web browser on the intranet. The live video is distributed in Motion JPEG format using the real-time streaming protocol (RTSP), and a Java applet has been written to show the video on the client side. The control system contains basic image processing features, and the 3D wireframe of the tokamak can be projected onto selected frames. A MATLAB interface is also presented, with advanced post-processing and analysis features to make the raw data available to high-level computing programs. In this contribution, all the concepts of the EDICAM control center and the functions of the distinct software modules are described.

  9. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    The lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason preventing the wider use of multi-modal speech processing. In this study, we have developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement from the light reflected from the mouth region. In experiments, we achieved a 66% word recognition rate using lip movement features alone. This result shows that the developed sensor can be utilized as a tool for multi-modal speech processing when combined with a microphone mounted on the headset.

  10. Description, operation, and diagnostic routines for the adaptive intrusion data system

    International Nuclear Information System (INIS)

    Corlis, N.E.; Johnson, C.S.

    1978-03-01

    An Adaptive Intrusion Data System (AIDS) was developed to collect data from intrusion alarm sensors as part of an evaluation system to improve sensor performance. AIDS is a unique digital data compression, storage, and formatting system. It also incorporates a capability for video selection and recording for assessment of the sensors monitored by the system. The system is software reprogrammable to numerous configurations that may be utilized for the collection of environmental, bi-metal, analog, and video data. This manual covers the procedures for operating AIDS. Instructions are given to guide the operator in software programming and control option selections required to program AIDS for data collection. Software diagnostic programs are included in this manual as a method of isolating system problems

  11. A neuro-fuzzy inference system for sensor monitoring

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2001-01-01

    A neuro-fuzzy inference system combined with the wavelet denoising, PCA (principal component analysis), and SPRT (sequential probability ratio test) methods has been developed to monitor a given sensor using the information from other sensors. The parameters of the neuro-fuzzy inference system, which estimates the relevant sensor signal, are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to shorten the training time of the neuro-fuzzy system, simplify its structure, and ease the selection of its input signals. Using the residual signals between the estimated and the measured signals, the SPRT is applied to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
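
    The SPRT stage can be sketched for Gaussian residuals: the cumulative log-likelihood ratio is compared against Wald's thresholds derived from the allowed error rates. The degraded-mean, variance, and error-rate values below are illustrative assumptions, not the paper's tuning.

```python
import math

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on estimation residuals:
    H0 = healthy (zero-mean residual) vs H1 = degraded (mean mu1)."""
    upper = math.log((1 - beta) / alpha)   # crossing it accepts H1 (degraded)
    lower = math.log(beta / (1 - alpha))   # crossing it accepts H0 (healthy)
    llr = 0.0
    for r in residuals:
        # log-likelihood ratio increment for Gaussian residuals
        llr += (mu1 * r - mu1 ** 2 / 2) / sigma ** 2
        if llr >= upper:
            return "degraded"
        if llr <= lower:
            return "healthy"
    return "undecided"
```

    The appeal of the sequential form is that a decision is reached as soon as the evidence suffices, rather than after a fixed sample size.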

  12. Inexpensive remote video surveillance system with microcomputer and solar cells

    International Nuclear Information System (INIS)

    Guevara Betancourt, Edder

    2013-01-01

    A low-cost prototype for remote video surveillance is developed using a Raspberry Pi board. Additionally, the theoretical basis for energy independence has been developed, using solar cells and a battery bank. Some existing commercial monitoring systems are studied and analyzed, covering components such as cameras, communication devices (WiFi and 3G), free software packages for video surveillance, control mechanisms, and the theory of remote photovoltaic systems. A number of steps are described to implement the module and to install, configure, and test each of the hardware and software elements that make it up, exploring the feasibility of providing intelligence to the system using the chosen software. Events generated by motion detection can be viewed, archived, and extracted in a simple, intuitive way. The implementation of the video surveillance module with a microcomputer and motion detection software (ZoneMinder) has proven to be an option with a lot of potential, as the monitoring and data recording platform provides all the tools needed for robust and secure surveillance. (author)

  13. Optical seismic sensor systems and methods

    Science.gov (United States)

    Beal, A. Craig; Cummings, Malcolm E.; Zavriyev, Anton; Christensen, Caleb A.; Lee, Keun

    2015-12-08

    Disclosed is an optical seismic sensor system for measuring seismic events in a geological formation, including a surface unit for generating and processing an optical signal, and a sensor device optically connected to the surface unit for receiving the optical signal over an optical conduit. The sensor device includes at least one sensor head for sensing a seismic disturbance from at least one direction during a deployment of the sensor device within a borehole of the geological formation. The sensor head includes a frame and a reference mass attached to the frame via at least one flexure, such that movement of the reference mass relative to the frame is constrained to a single predetermined path.

  14. Real-time video streaming system for LHD experiment using IP multicast

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yamamoto, Takashi; Yoshida, Masanobu; Nagayama, Yoshio; Hasegawa, Makoto

    2009-01-01

    In order to accomplish smooth cooperative research, remote participation plays an important role. For this purpose, the authors have been developing various applications for remote participation in the LHD (Large Helical Device) experiments, such as a Web interface for visualization of acquired data. The video streaming system is one of them. It is useful for grasping the status of the ongoing experiment remotely, and we provide the video images displayed in the control room to remote users. However, usual streaming servers cannot send video images without delay. The delay depends on how the images are sent, but even a little delay might become critical if the researchers use the images to adjust the diagnostic devices. One of the main causes of delay is the procedure of compressing and decompressing the images. Furthermore, commonly used video compression methods are lossy; they remove less important information to reduce the size. However, lossy images cannot be used for physical analysis because the original information is lost. Therefore, video images for remote participation should be sent without compression in order to minimize the delay and to supply high-quality images suitable for physical analysis. However, sending uncompressed video images requires large network bandwidth. For example, sending 5 frames of 16-bit color SXGA images a second requires 100 Mbps. Furthermore, the video images must be sent to several remote sites simultaneously. It is hard for a server PC to handle such a large amount of data. To cope with this problem, the authors adopted IP multicast to send video images to several remote sites at once. Because IP multicast packets are sent only to the networks on which clients want the data, the load of the server does not depend on the number of clients and the network load is reduced. In this paper, the authors discuss the feasibility of a high-bandwidth video streaming system using IP multicast. (author)
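
    The bandwidth figure quoted above, and the one-socket-for-all-clients property of multicast, can be sketched as follows. The multicast group address and port are illustrative, and `send_chunk` is a hypothetical helper, not the LHD system's code.

```python
import socket
import struct

# The bandwidth quoted above: 16-bit SXGA (1280 x 1024) at 5 frames/s.
bits_per_frame = 1280 * 1024 * 16
mbps = bits_per_frame * 5 / 1e6   # about 105 Mbit/s, i.e. roughly 100 Mbps

# Minimal multicast sender: a single socket reaches every subscribed site,
# so the server load does not grow with the number of clients.
GROUP, PORT = "239.1.1.1", 5004   # illustrative group address and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 8))

def send_chunk(payload: bytes):
    """Hypothetical helper: push one chunk of a frame to the group."""
    sock.sendto(payload, (GROUP, PORT))
```

    Receivers would join the same group with `IP_ADD_MEMBERSHIP`, and routers replicate the packets only toward networks with subscribed clients.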

  15. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  16. Sensor Arrays and Electronic Tongue Systems

    Directory of Open Access Journals (Sweden)

    Manel del Valle

    2012-01-01

    This paper describes recent work performed with electronic tongue systems utilizing electrochemical sensors. The electronic tongue concept is a new trend in sensors that uses arrays of sensors together with chemometric tools to unravel the complex information generated. Initial contributions, and also the most used variant, employ conventional ion-selective electrodes, in which case it is named a potentiometric electronic tongue. The second important variant is the one that employs voltammetry for its operation. As the chemometric processing tool, the use of artificial neural networks as the preferred data-processing variant will be described. The use of sensor arrays inserted in flow injection or sequential injection systems will exemplify attempts made to automate the operation of electronic tongues. Significant use of biosensors, mainly enzyme-based, to form what is already named the bioelectronic tongue will also be presented. Application examples will be illustrated with selected case studies from the Sensors and Biosensors Group at the Autonomous University of Barcelona.

  17. The design of red-blue 3D video fusion system based on DM642

    Science.gov (United States)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty in traditional 3D video capture, including camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance-component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit, built around the DM642, enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and, synchronously, the G and B components from the other, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and BIOS. By adding red and blue components, the system reduces the loss of chrominance information and keeps the picture's color saturation above 95% of the original. The enhancement algorithm is optimized to reduce the amount of data processed during fusion, shortening the fusion time and improving the viewing experience. Experimental results show that the system can capture images at close range, output red-blue 3D video, and provide a good experience to audiences wearing red-blue glasses.
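    The core of the red-blue fusion step, taking the R channel from one camera and the G, B channels from the other, can be sketched per pixel (a minimal Python illustration; the actual system works on whole frames in hardware after YCbCr-to-RGB conversion):

```python
def fuse_red_blue(left_px, right_px):
    """Compose one anaglyph pixel from two RGB tuples:
    R from the left camera, G and B from the right camera."""
    r_left, _, _ = left_px
    _, g_right, b_right = right_px
    return (r_left, g_right, b_right)

# A red-heavy left pixel and a cyan-heavy right pixel fuse into one
# pixel carrying depth information for red-blue glasses:
print(fuse_red_blue((200, 10, 20), (30, 120, 140)))  # (200, 120, 140)
```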

  18. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature (mel-frequency cepstral coefficients) is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
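    The pair-wise ranking constraint described above, an edited segment should score higher than the trimmed parts of its raw counterpart, is commonly trained with a hinge loss. A minimal linear sketch (illustrative only, not the paper's exact latent formulation):

```python
def score(w, x):
    """Linear ranking score of a feature vector x under weights w."""
    return sum(wi * xi for wi, xi in zip(w, x))

def pairwise_hinge(w, edited_feats, raw_feats, margin=1.0):
    """Zero loss when the edited segment outscores the raw trimmed
    segment by at least the margin; linear penalty otherwise."""
    return max(0.0, margin - (score(w, edited_feats) - score(w, raw_feats)))

# Satisfied constraint -> no loss; violated constraint -> positive loss.
print(pairwise_hinge([1.0], [3.0], [1.0]))  # 0.0
print(pairwise_hinge([1.0], [1.0], [1.0]))  # 1.0
```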

  19. Smart sensor systems for human health breath monitoring applications.

    Science.gov (United States)

    Hunter, G W; Xu, J C; Biaggi-Labiosa, A M; Laskowski, D; Dutta, P K; Mondal, S P; Ward, B J; Makel, D B; Liu, C C; Chang, C W; Dweik, R A

    2011-09-01

    Breath analysis techniques offer a potential revolution in health care diagnostics, especially if these techniques can be brought into standard use in the clinic and at home. The advent of microsensors combined with smart sensor system technology enables a new generation of sensor systems with significantly enhanced capabilities and minimal size, weight and power consumption. This paper discusses the microsensor/smart sensor system approach and provides a summary of efforts to migrate this technology into human health breath monitoring applications. First, the basic capability of this approach to measure exhaled breath associated with exercise physiology is demonstrated. Building from this foundation, the development of a system for a portable asthma home health care system is described. A solid-state nitric oxide (NO) sensor for asthma monitoring has been identified, and efforts are underway to miniaturize this NO sensor technology and integrate it into a smart sensor system. It is concluded that base platform microsensor technology combined with smart sensor systems can address the needs of a range of breath monitoring applications and enable new capabilities for healthcare.

  20. Optimal Sensor Selection for Health Monitoring Systems

    Science.gov (United States)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
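    The S4 optimization itself is not detailed in the abstract, but the basic shape of merit-driven sensor suite selection can be illustrated with a greedy forward search over a user-supplied merit metric (a sketch under that assumption, not the actual S4 algorithm):

```python
def greedy_select(sensors, merit, k):
    """Greedily grow a sensor suite of size k, at each step adding
    the sensor that most improves the merit metric of the suite.

    `merit` maps a list of sensors to a number (higher is better)."""
    chosen = []
    for _ in range(k):
        best = max((s for s in sensors if s not in chosen),
                   key=lambda s: merit(chosen + [s]))
        chosen.append(best)
    return chosen

# Toy merit: each sensor contributes an independent diagnostic value
# (sensor names are hypothetical).
values = {"turbopump_speed": 3, "chamber_pressure": 2, "skin_temp": 1}
merit = lambda suite: sum(values[s] for s in suite)
print(greedy_select(list(values), merit, 2))
# ['turbopump_speed', 'chamber_pressure']
```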

  1. Alcohol Control: Mobile Sensor System and Numerical Signal Analysis

    Directory of Open Access Journals (Sweden)

    Rolf SEIFERT

    2016-10-01

    Full Text Available An innovative mobile sensor system for alcohol control in the respiratory air is introduced. The gas sensor included in the sensor system is operated thermo-cyclically. Ethanol is the leading component in this context; however, other components occurring in the breathing air can influence the determination of the ethanol concentration. Therefore, mono-ethanol samples and binary gas mixtures are measured by the sensor system and analyzed with a new calibration and evaluation procedure, which is also incorporated in the system. The applications demonstrate a good substance identification capability of the sensor system and very good concentration determination of the components.

  2. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    Science.gov (United States)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs represents a vast amount of video to be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data becomes a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video including metadata (i.e. position coordinates, target) from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework, we execute a use case where surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is also evaluated.

  3. Vertebrate gravity sensors as dynamic systems

    Science.gov (United States)

    Ross, M. D.

    1985-01-01

    This paper considers vertebrate gravity receptors as dynamic sensors. That is, it is hypothesized that gravity is a constant force to which an acceleration-sensing system would readily adapt. Premises are considered in light of the presence of kinocilia on hair cells of vertebrate gravity sensors; differences in loading of the sensors among species; and possible reduction in loading by the inclusion of much organic material in otoconia. Moreover, organic-inorganic interfaces may confer a piezoelectric property upon otoconia, which would increase the sensitivity of the sensory system to small accelerations. Comparisons with man-made accelerometers are briefly taken up.

  4. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a 1:25 scale model of the divertor level of the Tokamak building.
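    As an illustration of how marker observations yield a pose, consider a simplified planar case with two fiducial markers mounted front and rear on the vehicle: once the cameras have triangulated each marker's world coordinates, position and orientation follow directly (a hypothetical reduction of the multi-camera problem, not the paper's estimator):

```python
import math

def pose_from_markers(front_xy, rear_xy):
    """2D pose of a vehicle from the world coordinates of two
    fiducial markers: position is the midpoint of the markers,
    heading is the angle of the rear-to-front axis."""
    (xf, yf), (xr, yr) = front_xy, rear_xy
    x = (xf + xr) / 2.0
    y = (yf + yr) / 2.0
    heading = math.atan2(yf - yr, xf - xr)
    return x, y, heading

# Front marker 2 m ahead of the rear marker along the x axis:
print(pose_from_markers((2.0, 0.0), (0.0, 0.0)))  # (1.0, 0.0, 0.0)
```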

  5. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a 1:25 scale model of the divertor level of the Tokamak building.

  6. Sensor Selection method for IoT systems – focusing on embedded system requirements

    Directory of Open Access Journals (Sweden)

    Hirayama Masayuki

    2016-01-01

    Full Text Available Recently, various types of sensors have been developed. Using these sensors, IoT systems have become hot topics in the embedded system domain. However, sensor selection for embedded systems has not been well discussed so far. This paper focuses on embedded systems' features and architecture, and proposes a sensor selection method composed of seven steps. In addition, we applied the proposed method to a simple example: sensor selection for a computer-scored answer sheet reader unit. From this case study, an idea for using FTA in sensor selection is also discussed.

  7. Research of real-time video processing system based on 6678 multi-core DSP

    Science.gov (United States)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly in the direction of intelligence, and the complex algorithms involved pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, stabilization and enhancement into an organic whole, with good real-time behavior and superior performance. It breaks through the defects of traditional video processing systems, such as simple functionality and single-purpose products, and addresses video applications in security monitoring and related fields, giving full play to the effectiveness of video monitoring and improving enterprise economic benefits.

  8. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with harsh environments. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses USB cameras to capture video of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. Video data are compressed with the JPEG standard and transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first describes the necessity of the system, then briefly introduces the hardware and software design, and details the configuration and compilation of the embedded Linux operating system and the building and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay of about 40 ms over the public network.
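    The abstract does not specify the wire format used to push JPEG frames to the monitoring center; one common choice for this kind of remote monitoring is MJPEG over HTTP, where each compressed frame is wrapped as a multipart part. A framing sketch only (the boundary name is illustrative):

```python
def mjpeg_part(jpeg_bytes, boundary=b"frame"):
    """Wrap one JPEG-compressed frame as a multipart/x-mixed-replace
    part, ready to be appended to an open HTTP response stream."""
    return (b"--" + boundary + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii")
            + b"\r\n\r\n" + jpeg_bytes + b"\r\n")

# Each captured-and-compressed frame gets its own part on the stream:
part = mjpeg_part(b"\xff\xd8fake")  # placeholder JPEG payload (6 bytes)
print(part.split(b"\r\n")[0])  # b'--frame'
```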

  9. Identification, synchronisation and composition of user-generated videos

    OpenAIRE

    Bano, Sophia

    2016-01-01

    Joint supervision (cotutela): Universitat Politècnica de Catalunya and Queen Mary University of London. The increasing availability of smartphones is facilitating people to capture videos of their experience when attending events such as concerts, sports competitions and public rallies. Smartphones are equipped with inertial sensors which could be beneficial for event understanding. The captured User-Generated Videos (UGVs) are made available on media sharing websites. Searching and mining of UGVs of the same eve...

  10. Engineering workstation: Sensor modeling

    Science.gov (United States)

    Pavel, M; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation comprises subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long-term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from the phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  11. Next generation sensors and systems

    CERN Document Server

    2016-01-01

    Written by experts in their areas of research, this book outlines the current status of the fundamentals and analytical concepts, modelling and design issues, technical details and practical applications of different types of sensors, and discusses trends in the next generation of sensors and systems in sensing technology. It will be useful as a reference for engineers and scientists; postgraduate students in particular will find it a useful reference for research on wearable sensors, devices and technologies.

  12. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

    Full Text Available The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.
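    The block-by-block allocation of computational resource across channels can be illustrated with a greedy marginal-gain scheme, where each unit of computation goes to the channel whose distortion would drop the most (a simple sketch; the paper derives optimal coding parameters from a complexity-distortion model rather than greedily):

```python
def allocate_cycles(marginal_gain, total_units):
    """Distribute a fixed computation budget across channels one unit
    at a time, always giving the next unit to the channel with the
    largest marginal distortion reduction.

    `marginal_gain` maps channel name -> f(units_already_given)."""
    alloc = {name: 0 for name in marginal_gain}
    for _ in range(total_units):
        best = max(marginal_gain, key=lambda c: marginal_gain[c](alloc[c]))
        alloc[best] += 1
    return alloc

# Hypothetical channels: "news" benefits more from extra encoding
# effort than "lobby", with diminishing returns for both.
gains = {"news": lambda u: 10.0 / (1 + u), "lobby": lambda u: 6.0 / (1 + u)}
print(allocate_cycles(gains, 4))  # {'news': 3, 'lobby': 1}
```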

  13. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  14. [Telemedicine with digital video transport system].

    Science.gov (United States)

    Hahm, Joon Soo; Shimizu, Shuji; Nakashima, Naoki; Byun, Tae Jun; Lee, Hang Lak; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Sun Il; Kim, Tae Eun; Yun, Jiwon; Park, Yong Jin

    2004-06-01

    The growth of technology based on internet protocol has affected informatics and automatic control in medical fields. The aim of this study was to establish a telemedical education system by developing high-quality image transfer using DVTS (digital video transport system) over a high-speed internet network. Using telemedicine, we were able to send surgical images not only domestically but also internationally, and we could discuss the progress of surgical procedures from the operating room and seminar room. The Korea-Japan cable network (KJCN) is a submarine link between Busan and Fukuoka, while the Korea advanced research network (KOREN) was used to connect Busan and Seoul. To link images between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan, we ran a teleconference system and a recorded image-streaming system with DVTS over an IPv4 network. Two operative cases were transmitted successfully. We were able to maintain a bandwidth of 60 Mbps for two-line transmission. The transmitted moving images showed no frame loss at a rate of 30 frames per second. The sound was also clear, and the time delay was less than 0.3 sec. Our study has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over internet protocol. It is easy to perform, reliable, and also economical; thus, it will be a promising tool for worldwide telemedical communication in the future.

  15. Smart Sensor Network System For Environment Monitoring

    Directory of Open Access Journals (Sweden)

    Javed Ali Baloch

    2012-07-01

    Full Text Available SSN (Smart Sensor Network) systems could be used to monitor buildings with modern infrastructure, plant sites with chemical pollution, horticulture, natural habitats, wastewater management and modern transport systems. The primary goal of such systems is to sense attributes of phenomena and make decisions on the basis of the sensed values. In this paper a smart, spatially aware sensor system is presented: a system which can continuously monitor the network to observe its functionality, trigger alerts to the base station if a change in the system occurs, and provide feedback periodically, on demand or even continuously, depending on the nature of the application. The results of the simulation trials presented in this paper exhibit the performance of smart spatially aware sensor networks.

  16. Video-Guidance Design for the DART Rendezvous Mission

    Science.gov (United States)

    Ruth, Michael; Tracy, Chisholm

    2004-01-01

    NASA's Demonstration of Autonomous Rendezvous Technology (DART) mission will validate a number of different guidance technologies, including state-differenced GPS transfers and close-approach video guidance. The video guidance for DART will employ NASA/Marshall's Advanced Video Guidance Sensor (AVGS). This paper focuses on the terminal phase of the DART mission, which includes close-approach maneuvers under AVGS guidance. The closed-loop video guidance design for DART is driven by a number of competing requirements, including the need to maximize tracking bandwidths while coping with measurement noise and the need to minimize RCS firings. A range of different strategies for attitude control and docking guidance has been considered for the DART mission, and design decisions are driven by the goal of minimizing both design complexity and the effects of video guidance lags. The DART design employs an indirect docking approach, in which the guidance position targets are defined using relative attitude information. Flight simulation results have proven the effectiveness of the video guidance design.

  17. Optimization of wireless Bluetooth sensor systems.

    Science.gov (United States)

    Lonnblad, J; Castano, J; Ekstrom, M; Linden, M; Backlund, Y

    2004-01-01

    Within this study, three different Bluetooth sensor systems, replacing cables for transmission of biomedical sensor data, have been designed and evaluated. The three sensor architectures are built on 1-, 2- and 3-chip solutions, and depending on the monitoring situation and signal character, different solutions are optimal. Essential parameters for all systems have been low physical weight and small size, resistance to interference, and interoperability with other technologies such as global or local networks, PCs and mobile phones. Two different biomedical input signals, ECG and PPG (photoplethysmography), have been used to evaluate the three solutions. The study shows that it is possible to continuously transmit an analogue signal. At low sampling rates and with slowly varying parameters, as when monitoring heart rate with PPG, the 1-chip solution is the most suitable, offering low power consumption and thus a longer battery lifetime or a smaller battery, minimizing the weight of the sensor system. On the other hand, when a higher sampling rate is required, as with ECG, the 3-chip architecture, with an FPGA or microcontroller, offers the best solution and performance. Our conclusion is that Bluetooth might be useful in replacing the cables of medical monitoring systems.

  18. Ultrasonic sensors in urban traffic driving-aid systems.

    Science.gov (United States)

    Alonso, Luciano; Milanés, Vicente; Torre-Ferrero, Carlos; Godoy, Jorge; Oria, Juan P; de Pedro, Teresa

    2011-01-01

    Currently, vehicles are often equipped with active safety systems to reduce the risk of accidents, most of which occur in urban environments. The most prominent include Antilock Braking Systems (ABS), Traction Control and Stability Control. All these systems use different kinds of sensors to constantly monitor the conditions of the vehicle, and act in an emergency. In this paper the use of ultrasonic sensors in active safety systems for urban traffic is proposed, and the advantages and disadvantages when compared to other sensors are discussed. Adaptive Cruise Control (ACC) for urban traffic based on ultrasounds is presented as an application example. The proposed system has been implemented in a fully-automated prototype vehicle and has been tested under real traffic conditions. The results confirm the good performance of ultrasonic sensors in these systems.
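    The distance measurement underlying an ultrasound-based ACC is a time-of-flight calculation: the pulse travels to the obstacle and back, and the speed of sound varies with air temperature. A minimal sketch:

```python
def echo_distance_m(echo_time_s, temp_c=20.0):
    """Distance to an obstacle from an ultrasonic echo delay.
    The pulse covers the distance twice (out and back), and the
    speed of sound in air rises roughly 0.606 m/s per deg C."""
    speed_of_sound = 331.3 + 0.606 * temp_c  # m/s at temp_c
    return speed_of_sound * echo_time_s / 2.0

# A 10 ms echo at 20 deg C corresponds to about 1.72 m:
print(round(echo_distance_m(0.01), 3))  # 1.717
```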

  19. Secure Video Surveillance System (SVSS) for unannounced safeguards inspections

    International Nuclear Information System (INIS)

    Galdoz, Erwin G.; Pinkalla, Mark

    2010-01-01

    The Secure Video Surveillance System (SVSS) is a collaborative effort between the U.S. Department of Energy (DOE), Sandia National Laboratories (SNL), and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC). The joint project addresses specific requirements of redundant surveillance systems installed in two South American nuclear facilities as a tool to support unannounced inspections conducted by ABACC and the International Atomic Energy Agency (IAEA). The surveillance covers the critical time (as much as a few hours) between the notification of an inspection and the access of inspectors to the location in the facility where surveillance equipment is installed. ABACC and the IAEA currently use the EURATOM Multiple Optical Surveillance System (EMOSS). This outdated system is no longer available or supported by the manufacturer. The current EMOSS system has met the project objective; however, the lack of available replacement parts and system support has made it unsustainable and has increased the risk of an inoperable system. A new system that utilizes current technology and is maintainable is required to replace the aging EMOSS system. ABACC intends to replace one of the existing ABACC EMOSS systems with the Secure Video Surveillance System. SVSS utilizes commercial off-the-shelf (COTS) technologies for all individual components. Sandia National Laboratories supported the system design for SVSS to meet safeguards requirements, i.e. tamper indication, data authentication, etc. The SVSS consists of two video surveillance cameras linked securely to a data collection unit. The collection unit is capable of retaining historical surveillance data for at least three hours, with picture intervals as short as 1 sec. Images in .jpg format are available to inspectors using various software review tools. SNL has delivered two SVSS systems for test and evaluation at the ABACC Safeguards Laboratory. An additional 'proto-type' system remains

  20. MicroSensors Systems: detection of a dismounted threat

    Science.gov (United States)

    Davis, Bill; Berglund, Victor; Falkofske, Dwight; Krantz, Brian

    2005-05-01

    The Micro Sensor System (MSS) is a layered sensor network with the goal of detecting dismounted threats approaching high value assets. A low-power unattended ground sensor network is dependent on a network protocol for efficiency, in order to minimize data transmissions after network establishment. The reduction of network 'chattiness' is a primary driver for minimizing power consumption and is a factor in establishing a low probability of detection and interception. The MSS has developed a unique protocol to meet these challenges. Unattended ground sensor systems are most likely dependent on batteries for power, which, due to size, determines the ability of the sensor to be concealed after placement. To minimize power requirements, overcome size limitations, and maintain a low system cost, the MSS utilizes advanced manufacturing processes known as Fluidic Self-Assembly and Chip Scale Packaging. The type of sensing element and the ability to sense various phenomenologies (particularly magnetic) at ranges greater than a few meters limit the effectiveness of a system. The MicroSensor System will overcome these limitations by deploying large numbers of low cost sensors, made possible by the advanced manufacturing processes used in production of the sensors. The MSS program will provide unprecedented levels of real-time battlefield information, which greatly enhances combat situational awareness when integrated with the existing Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) infrastructure. This system will provide an important boost to realizing the information-dominant, network-centric objective of Joint Vision 2020.
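    One standard way to reduce the 'chattiness' the abstract refers to is report-by-exception: a node transmits only when the sensed value has drifted past a threshold since its last report (an illustrative policy; the actual MSS protocol is not published in the abstract):

```python
def should_transmit(last_sent, current, threshold):
    """True when the change since the last transmitted value is large
    enough to be worth a radio transmission; otherwise stay silent
    to save power and lower the probability of intercept."""
    return abs(current - last_sent) >= threshold

print(should_transmit(10.0, 10.4, 0.5))  # False
print(should_transmit(10.0, 10.6, 0.5))  # True
```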

  1. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications.

    Science.gov (United States)

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Nonomura, Yutaka; Muroyama, Masanori

    2017-08-28

    Robot tactile sensation can enhance human-robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as "sensor platform LSI") as a framework for a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors (an on-chip temperature sensor and off-chip capacitive and resistive tactile sensors) and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which represents a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line, supporting temperature, capacitive and resistive sensing, was successfully demonstrated.

  2. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    Science.gov (United States)

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-01-01

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second experiment, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observation of a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identifications occurred in cycles that included a transition or turn event. Identification accuracy can be improved by identifying transition and turn events separately. This system could be used to evaluate each skier’s subtechniques under course conditions. PMID:27049388

  3. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Yoshihisa Sakurai

    2016-04-01

    Full Text Available This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second experiment, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observation of a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identifications occurred in cycles that included a transition or turn event. Identification accuracy can be improved by identifying transition and turn events separately. This system could be used to evaluate each skier’s subtechniques under course conditions.
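
    The headline accuracy follows directly from the cycle counts reported above; a quick arithmetic check (the `precision_pct` helper restates the standard TP/(TP+FP) definition, and its example counts are illustrative, not taken from the study):

```python
# Overall accuracy from the reported counts: 6418 of 6768 cycles correct.
correct, total = 6418, 6768
print(f"accuracy: {100 * correct / total:.1f}%")      # accuracy: 94.8%

# Per-subtechnique precision = TP / (TP + FP); the counts below are
# illustrative, chosen only to reproduce the reported 87.6% for V1R.
def precision_pct(true_pos, false_pos):
    return 100 * true_pos / (true_pos + false_pos)

print(f"precision: {precision_pct(876, 124):.1f}%")   # precision: 87.6%
```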

  4. Intelligent Wireless Sensor Networks for System Health Monitoring

    Science.gov (United States)

    Alena, Rick

    2011-01-01

    Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network (PAN) standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. WSNs provide the inherent fault tolerance required for aerospace applications. The Discovery and Systems Health Group at NASA Ames Research Center has been developing WSN technology for use aboard aircraft and spacecraft for System Health Monitoring of structures and life support systems, using funding from the NASA Engineering and Safety Center and the Exploration Technology Development and Demonstration Program. This technology provides key advantages for low-power, low-cost ancillary sensing systems, particularly across pressure interfaces and in areas where it is difficult to run wires. Intelligence for sensor networks can be defined as the capability of forming dynamic sensor networks, allowing high-level application software to identify and address any sensor that joins the network without the use of any centralized database defining the sensors' characteristics. The IEEE 1451 Standard defines methods for the management of intelligent sensor systems, and the IEEE 1451.4 section defines Transducer Electronic Datasheets (TEDS), which contain key information regarding the sensor's characteristics, such as name, description, serial number, calibration information, and user information such as location within a vehicle. By locating the TEDS information on the wireless sensor itself and enabling access to this information base from the application software, the application can identify the sensor unambiguously and interpret and present the sensor data stream without reference to any other information.
The application software is able to read the status of each sensor module, responding in real-time to changes of
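
    The self-describing TEDS idea above can be sketched as a plain per-sensor data record; a minimal illustration (the field names and values here are hypothetical and do not reflect the IEEE 1451.4 binary template layout):

```python
from dataclasses import dataclass

# Sketch of the kind of metadata a TEDS carries on the sensor itself,
# letting application software identify the sensor without a central
# database. Fields and values are illustrative assumptions.
@dataclass
class Teds:
    name: str
    description: str
    serial_number: str
    calibration: dict     # e.g. offset/scale for interpreting raw data
    location: str         # user information, e.g. position in a vehicle

teds = Teds(
    name="pressure-01",
    description="cabin pressure",
    serial_number="SN-0042",
    calibration={"offset": 0.0, "scale": 1.013},
    location="deck 2, panel A",
)

# The application reads identity and calibration from the node itself.
print(teds.name, teds.location)   # pressure-01 deck 2, panel A
```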

  5. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters from both the physical and application layers, over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
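
    The non-linear regression branch can be illustrated with a small least-squares fit; a sketch under stated assumptions (the synthetic data and the model form MOS = a1 + a2·log(SBR) + a3·PLR, with SBR the sender bitrate and PLR the packet loss rate, are generic choices for illustration, not the paper's fitted model or parameters):

```python
import numpy as np

# Synthetic QoS data: sender bitrate (kbps) and packet loss ratio.
rng = np.random.default_rng(0)
sbr = rng.uniform(100, 1000, 50)
plr = rng.uniform(0.0, 0.2, 50)
# Ground-truth MOS generated from an assumed nonlinear relation plus noise.
mos = 1.5 + 0.5 * np.log(sbr) - 8.0 * plr + rng.normal(0, 0.1, 50)

# Least-squares fit of the coefficients a1..a3 in the assumed model form.
X = np.column_stack([np.ones_like(sbr), np.log(sbr), plr])
coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
print(coef.round(2))   # recovered coefficients, close to (1.5, 0.5, -8.0)
```

A real model would be trained on subjective MOS scores and validated on unseen sequences, as the abstract describes.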

  6. Semiautonomous Avionics-and-Sensors System for a UAV

    Science.gov (United States)

    Shams, Qamar

    2006-01-01

    Unmanned Aerial Vehicles (UAVs), autonomous or remotely controlled pilotless aircraft, have recently been thrust into the spotlight for military applications, for homeland security, and as test beds for research. In addition to these functions, there are many space applications in which lightweight, inexpensive, small UAVs can be used, e.g., to determine the chemical composition and other qualities of the atmospheres of remote planets. Moreover, on Earth, such UAVs can be used to obtain information about weather in various regions; in particular, they can be used to analyze wide-band acoustic signals to aid in determining the complex dynamics of movement of hurricanes. The Advanced Sensors and Electronics group at Langley Research Center has developed an inexpensive, small, integrated avionics-and-sensors system to be installed in a UAV that serves two purposes. The first purpose is to provide flight data to an AI (Artificial Intelligence) controller as part of an autonomous flight-control system. The second purpose is to store data from a subsystem of distributed MEMS (microelectromechanical systems) sensors. Examples of these MEMS sensors include humidity, temperature, and acoustic sensors, plus chemical sensors for detecting various vapors and other gases in the environment. The critical sensors used for flight control are a differential-pressure sensor that is part of an apparatus for determining airspeed, an absolute-pressure sensor for determining altitude, three orthogonal accelerometers for determining tilt and acceleration, and three orthogonal angular-rate detectors (gyroscopes). By using these eight sensors, it is possible to determine the orientation, height, speed, and rates of roll, pitch, and yaw of the UAV. This avionics-and-sensors system is shown in the figure. During the last few years, there has been rapid growth and advancement in the technological disciplines of MEMS, of onboard artificial-intelligence systems, and of smaller, faster, and
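
    The tilt determination from the three orthogonal accelerometers can be illustrated with the standard static-tilt formulas (the axis conventions and sign choices below are assumptions for illustration, not taken from the article):

```python
import math

# Static tilt from a 3-axis accelerometer reading (in units of g):
# with the vehicle at rest, the accelerometer measures only gravity,
# so pitch and roll follow from the gravity components.
def tilt_deg(ax, ay, az):
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Level sensor: gravity lies entirely on the z axis.
pitch, roll = tilt_deg(0.0, 0.0, 1.0)
print(abs(pitch), abs(roll))   # 0.0 0.0
```

Yaw cannot be recovered from accelerometers alone, which is why the system also carries three angular-rate gyroscopes.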

  7. Sensor data fusion for automated threat recognition in manned-unmanned infantry platoons

    Science.gov (United States)

    Wildt, J.; Varela, M.; Ulmke, M.; Brüggermann, B.

    2017-05-01

    To support a dismounted infantry platoon during deployment, we team it with several unmanned aerial and ground vehicles (UAVs and UGVs, respectively). The unmanned systems integrate seamlessly into the infantry platoon, providing automated reconnaissance during movement while keeping formation, as well as conducting close-range reconnaissance during halts. The sensor data each unmanned system provides is continuously analyzed in real time by specialized algorithms, detecting humans in live video from UAV-mounted infrared cameras as well as performing gunshot detection and bearing estimation with acoustic sensors. All recognized threats are fused into a consistent situational picture in real time, available to platoon and squad leaders as well as higher-level command and control (C2) systems. This gives friendly forces local information superiority and increased situational awareness without the need to constantly monitor the unmanned systems and sensor data.

  8. Alcohol control: Mobile sensor system and numerical signal analysis

    OpenAIRE

    Seifert, Rolf; Keller, Hubert B.; Conrad, Thorsten; Peter, Jens

    2016-01-01

    An innovative mobile sensor system for alcohol control in the respiratory air is introduced. The gas sensor included in the sensor system is operated thermo-cyclically. Ethanol is the leading component in this context. However, other components occur in the breathing air which can influence the concentration determination of ethanol. Therefore, mono-ethanol samples and binary gas mixtures are measured by the sensor system and analyzed with a new calibration and evaluation procedure which is ...

  9. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
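
    Registering an object across the two time-synchronized views can be sketched as nearest-centroid matching; a minimal illustration (this is a generic sketch, not the authors' registration method, and it omits the perspective adjustment the abstract mentions):

```python
# Match each object centroid detected in camera 1 to the nearest centroid
# detected in camera 2 in the simultaneous frame (squared-distance metric).
def match_centroids(cam1_pts, cam2_pts):
    pairs = []
    for p in cam1_pts:
        q = min(cam2_pts, key=lambda c: (c[0] - p[0])**2 + (c[1] - p[1])**2)
        pairs.append((p, q))
    return pairs

# One object seen at (10, 12) in camera 1; two candidates in camera 2.
pairs = match_centroids([(10, 12)], [(50, 60), (11, 13)])
print(pairs)   # [((10, 12), (11, 13))]
```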

  10. Wireless Sensor Network Metrics for Real-Time Systems

    Science.gov (United States)

    2009-05-20

    Wireless Sensor Network Metrics for Real-Time Systems. Phoebus Wei-Chih Chen, Electrical Engineering and Computer Sciences, University of California at ... The study of wireless sensor networks (WSNs) is moving from studies of WSNs in isolation toward studies where the WSN is treated as a component of a larger system.

  11. Ultrasonic Sensors in Urban Traffic Driving-Aid Systems

    Directory of Open Access Journals (Sweden)

    Teresa de Pedro

    2011-01-01

    Full Text Available Currently, vehicles are often equipped with active safety systems to reduce the risk of accidents, most of which occur in urban environments. The most prominent include Antilock Braking Systems (ABS), Traction Control and Stability Control. All these systems use different kinds of sensors to constantly monitor the conditions of the vehicle, and act in an emergency. In this paper the use of ultrasonic sensors in active safety systems for urban traffic is proposed, and the advantages and disadvantages when compared to other sensors are discussed. Adaptive Cruise Control (ACC) for urban traffic based on ultrasounds is presented as an application example. The proposed system has been implemented in a fully-automated prototype vehicle and has been tested under real traffic conditions. The results confirm the good performance of ultrasonic sensors in these systems.
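
    The urban ACC behaviour can be caricatured as a proportional controller on the measured ultrasonic gap; a minimal sketch (the gain, desired gap, and speed limit are illustrative assumptions, not values from the paper):

```python
# Proportional gap controller: slow down when the ultrasonic range to the
# leading vehicle falls below the desired gap, speed up (to a cap) when
# the gap opens. All parameters are illustrative.
def acc_speed(own_speed, gap_m, desired_gap_m=10.0, k=0.5, max_speed=50/3.6):
    error = gap_m - desired_gap_m          # positive when gap is too large
    target = own_speed + k * error
    return max(0.0, min(max_speed, target))  # clamp to [0, urban limit]

# Gap of 6 m is below the desired 10 m, so the controller slows down.
print(round(acc_speed(own_speed=10.0, gap_m=6.0), 2))   # 8.0
```

A deployed controller would of course add hysteresis, sensor filtering, and braking limits; this only illustrates the feedback idea.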

  12. Ultra-Low Power Sensor System for Disaster Event Detection in Metro Tunnel Systems

    Directory of Open Access Journals (Sweden)

    Jonah VINCKE

    2017-05-01

    Full Text Available In this extended paper, the concept for an ultra-low power wireless sensor network (WSN) for underground tunnel systems is presented, highlighting the chosen sensors. Its objectives are the detection of emergency events arising either from natural disasters, such as flooding or fire, or from terrorist attacks using explosives. Earlier works have demonstrated that the power consumption for communication can be reduced such that the data acquisition (i.e., the sensor sub-system) becomes the most significant energy consumer. By using ultra-low power components for the smoke detector, a hydrostatic pressure sensor for water ingress detection, and a passive acoustic emission sensor for explosion detection, all considered threats are covered while the energy consumption of the data acquisition can be kept very low. In addition to [1], the sensor system is integrated into a sensor board. The total average power consumption for operating the sensor sub-system is measured to be 35.9 µW for lower and 7.8 µW for upper nodes.
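
    The reported average power figures translate into node lifetimes with a back-of-the-envelope calculation (the battery capacity and voltage below are illustrative assumptions, not taken from the paper):

```python
# Battery lifetime implied by an average power draw. Capacity and voltage
# (a single assumed 2600 mAh, 3.6 V lithium cell) are illustrative.
def lifetime_years(avg_power_w, capacity_mah=2600, voltage_v=3.6):
    energy_j = capacity_mah / 1000 * 3600 * voltage_v   # stored energy, J
    seconds = energy_j / avg_power_w
    return seconds / (3600 * 24 * 365)

print(round(lifetime_years(35.9e-6), 1))   # lower nodes: 29.8 years
print(round(lifetime_years(7.8e-6), 1))    # upper nodes: 137.0 years
```

In practice self-discharge and communication bursts would dominate long before such figures are reached; the point is only that the sensing sub-system's draw is no longer the limiting factor.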

  13. Sensor-based material tagging system

    International Nuclear Information System (INIS)

    Vercellotti, L.C.; Cox, R.W.; Ravas, R.J.; Schlotterer, J.C.

    1991-01-01

    Electronic identification tags are being developed for tracking material and personnel. In applying electronic identification tags to radioactive materials safeguards, it is important to measure attributes of the material to ensure that the tag remains with the material. The addition of a microcontroller with an on-board analog-to-digital converter to an electronic identification tag's application-specific integrated circuit has been demonstrated as a means of providing the tag with sensor data. Each tag is assembled into a housing, which serves as a scale for measuring the weight of a paint-can-sized container and its contents. The temperature rise of the can above ambient is also measured, and a piezoelectric detector senses disturbances and immediately puts the tag into its alarm and beacon mode. Radiation measurement was also considered, but the background from nearby containers was found to be excessive. The sensor-based tagging system allows tracking of the material in cans as it is stored in vaults or moved through the manufacturing process. The paper presents details of the sensor-based material tagging system and describes a demonstration system.

  14. Circuits and Systems for Low-Power Miniaturized Wireless Sensors

    Science.gov (United States)

    Nagaraju, Manohar

    The field of electronic sensors has witnessed tremendous growth over the last decade, particularly with the proliferation of mobile devices. New applications in the Internet of Things (IoT) and wearable technology are expected to further fuel the demand for sensors, from current numbers in the range of billions to trillions in the next decade. The main challenges for a trillion sensors are continued miniaturization, low-cost and large-scale manufacturing processes, and low power consumption. Traditional integration and circuit design techniques in sensor systems are not suitable for applications in smart dust, IoT, etc. The first part of this thesis demonstrates an example sensor system for biosignal recording and illustrates the tradeoffs in the design of low-power miniaturized sensors. The different components of the sensor system are integrated at the board level. The second part of the thesis demonstrates fully integrated sensors that enable extreme miniaturization of a sensing system, with the sensor element, processing circuitry, a frequency reference for communication, and the communication circuitry in a single hermetically sealed die. Design techniques to reduce the power consumption of the sensor interface circuitry at the architecture and circuit level are demonstrated. The principles are used to design sensors for two of the most common physical variables: mass and pressure. A low-power wireless mass sensor suitable for a wide variety of biological/chemical sensing applications and a pressure sensor for Tire Pressure Monitoring Systems (TPMS) are demonstrated. Further, the idea of using high-Q resonators for a Voltage Controlled Oscillator (VCO) is proposed, and a low-noise, wide-bandwidth FBAR-based VCO is presented.

  15. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    Science.gov (United States)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.
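
    The locality argument can be illustrated with a toy direct-mapped cache model (parameters and access patterns are illustrative; the paper's MPEG-2 working-set analysis is far more detailed):

```python
# Count misses in a direct-mapped cache for a given address stream.
# 64 lines of 16 bytes = a 1 KB toy cache, for illustration only.
def misses(addresses, num_lines=64, line_size=16):
    tags = [None] * num_lines
    miss = 0
    for a in addresses:
        line = a // line_size          # which memory line the byte is in
        idx = line % num_lines         # direct-mapped placement
        if tags[idx] != line:          # conflict or cold miss
            tags[idx] = line
            miss += 1
    return miss

# Sequential streaming (high spatial locality, as in much MPEG data)
# vs. a pathological large-stride pattern that defeats the cache.
seq = list(range(0, 4096))
strided = [i * 1024 for i in range(4096)]
print(misses(seq), misses(strided))    # 256 4096
```

The sequential stream misses once per 16-byte line; the strided stream maps every access to the same cache line and misses every time, which is the kind of behavior cache-conscious layout of the decoder working set avoids.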

  16. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    Full Text Available A submersible video camera system, which aimed to record images of the growth of aquatic vegetation in Antarctic lakes for one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video system, without increasing the power consumption. This system was set on the lake floor of Lake Naga Ike (a tentative name) in Skarvsnes, Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater visual images for one year has been started by our diving operation.

  17. Heimdall System for MSSS Sensor Tasking

    Science.gov (United States)

    Herz, A.; Jones, B.; Herz, E.; George, D.; Axelrad, P.; Gehly, S.

    In Norse Mythology, Heimdall uses his foreknowledge and keen eyesight to keep watch for disaster from his home near the Rainbow Bridge. Orbit Logic and the Colorado Center for Astrodynamics Research (CCAR) at the University of Colorado (CU) have developed the Heimdall System to schedule observations of known and uncharacterized objects and search for new objects from the Maui Space Surveillance Site. Heimdall addresses the current need for automated and optimized SSA sensor tasking driven by factors associated with improved space object catalog maintenance. Orbit Logic and CU developed an initial baseline prototype SSA sensor tasking capability for select sensors at the Maui Space Surveillance Site (MSSS) using STK and STK Scheduler, and then added a new Track Prioritization Component for FiSST-inspired computations for predicted Information Gain and Probability of Detection, and a new SSA-specific Figure-of-Merit (FOM) for optimized SSA sensor tasking. While the baseline prototype addresses automation and some of the multi-sensor tasking optimization, the SSA-improved prototype addresses all of the key elements required for improved tasking leading to enhanced object catalog maintenance. The Heimdall proof-of-concept was demonstrated for MSSS SSA sensor tasking for a 24 hour period to attempt observations of all operational satellites in the unclassified NORAD catalog, observe a small set of high priority GEO targets every 30 minutes, make a sky survey of the GEO belt region accessible to MSSS sensors, and observe particular GEO regions that have a high probability of finding new objects with any excess sensor time. This Heimdall prototype software paves the way for further R&D that will integrate this technology into the MSSS systems for operational scheduling, improve the software's scalability, and further tune and enhance schedule optimization. The Heimdall software for SSA sensor tasking provides greatly improved performance over manual tasking, improved

  18. Developing Agent-Oriented Video Surveillance System through Agent-Oriented Methodology (AOM)

    Directory of Open Access Journals (Sweden)

    Cheah Wai Shiang

    2016-12-01

    Full Text Available Agent-oriented methodology (AOM) is a comprehensive and unified agent methodology for agent-oriented software development. Although AOM is claimed to be able to cope with complex system development, it has not yet been determined to what extent this is true. Therefore, it is vital to conduct an investigation to validate this methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS). An intruder-handling scenario is designed and implemented through AOM. AOM provides an alternative method to engineer a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualization of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.

  19. A Client-Server System for Ubiquitous Video Service

    Directory of Open Access Journals (Sweden)

    Ronit Nossenson

    2012-12-01

    Full Text Available In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.

  20. SENSORS FAULT DIAGNOSIS ALGORITHM DESIGN OF A HYDRAULIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Matej ORAVEC

    2017-06-01

    Full Text Available This article presents the design of a sensor fault diagnosis system for a hydraulic system, based on a group of three fault estimation filters. These filters are used to estimate the system states and sensor fault magnitudes. The article also briefly states the hydraulic system state control design with an integrator, which is an important assumption for the fault diagnosis system design. The sensor fault diagnosis system is implemented in the Matlab/Simulink environment and verified using a simulation model of the controlled hydraulic system. Verification of the designed fault diagnosis system is realized by a series of experiments that simulate sensor faults. The results of the experiments are briefly presented in the last part of this article.
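
    The core idea behind residual-based fault diagnosis can be reduced to a threshold check for illustration (a generic sketch, not the article's three-filter estimation scheme; the threshold value is an assumption):

```python
# Compare a sensor reading against a model-based prediction; a residual
# above the threshold flags a suspected sensor fault, and the residual
# itself serves as a crude estimate of the fault magnitude.
def detect_fault(measured, predicted, threshold=0.5):
    residual = abs(measured - predicted)
    return residual > threshold, residual

faulty, residual = detect_fault(measured=3.2, predicted=2.0)
print(faulty)   # True: the residual exceeds the 0.5 threshold
```

The article's filters go further by estimating states and fault magnitudes jointly, but the flagging decision rests on the same residual principle.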

  1. [The Development of Information Centralization and Management Integration System for Monitors Based on Wireless Sensor Network].

    Science.gov (United States)

    Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin

    2015-07-01

    We developed an information centralization and management integration system for monitors of different brands and models, using wireless sensor network technologies such as wireless location and wireless communication, based on the existing wireless network. With an adaptive implementation and low cost, the system, which possesses the advantages of real-time operation, efficiency and elaboration, is able to collect the status and data of the monitors, locate the monitors, and provide services with a web server, video server and locating server via the local network. Using an intranet computer, clinical and device management staff can access the status and parameters of the monitors. Application of this system provides convenience and saves human resources for clinical departments, as well as promoting efficiency, accuracy and elaboration in device management. The successful achievement of this system provides a solution for the integrated and elaborated management of mobile devices, including ventilators and infusion pumps.

  2. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

    Full Text Available Video surveillance systems are based on the video and image processing research areas within computer science. Video processing covers various methods used to track the changes in an existing scene in a specific video. Nowadays, video processing is one of the important areas of computer science. Two-dimensional videos are used to apply various segmentation and object detection and tracking processes, which exist in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking. Similar methods exist in the literature for this issue. In this research study, a more efficient method is proposed as an addition to the existing ones. Based on a model produced using adaptive background subtraction (ABS), an object detection and tracking system’s software is implemented in a computer environment. The performance of the developed system is tested via experiments with related video datasets. The experimental results and discussion are given in the study.
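
    A minimal running-average form of adaptive background subtraction can be sketched as follows (a generic formulation of the technique, not the paper's exact ABS variant; the learning rate and threshold are illustrative):

```python
import numpy as np

# One step of adaptive background subtraction: a pixel is foreground when
# |frame - background| exceeds a threshold, and the background model is
# updated as B <- (1 - alpha) * B + alpha * F only where no motion is seen,
# so the model adapts to gradual scene changes without absorbing objects.
def abs_step(frame, background, alpha=0.05, threshold=25):
    diff = np.abs(frame.astype(float) - background)
    mask = diff > threshold
    background = np.where(mask, background,
                          (1 - alpha) * background + alpha * frame)
    return mask, background

# Synthetic demo: a flat gray scene with one bright "moving object" pixel.
bg = np.full((4, 4), 50.0)
frame = bg.copy()
frame[1, 1] = 200
mask, bg = abs_step(frame, bg)
print(int(mask.sum()))   # 1 foreground pixel detected
```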

  3. AMA Conferences 2015. SENSOR 2015. 17th international conference on sensors and measurement technology. IRS2 2015. 14th international conference on infrared sensors and systems. Proceedings

    International Nuclear Information System (INIS)

    2015-01-01

    These proceedings contain the presentations of two conferences: SENSOR 2015 and IRS2 2015 (the international conference on infrared sensors and systems). The first part, on SENSOR 2015, contains the following chapters: (A) SENSOR PRINCIPLES: A.1: Mechanical sensors; A.2: Optical sensors; A.3: Ultrasonic sensors; A.4: Microacoustic sensors; A.5: Magnetic sensors; A.6: Impedance sensors; A.7: Gas sensors; A.8: Flow sensors; A.9: Dimensional measurement; A.10: Temperature and humidity sensors; A.11: Chemosensors; A.12: Biosensors; A.13: Embedded sensors; A.14: Sensor-actuator systems; (B) SENSOR TECHNOLOGY: B.1: Sensor design; B.2: Numerical simulation of sensors; B.3: Sensor materials; B.4: MEMS technology; B.5: Micro-Nano-Integration; B.6: Packaging; B.7: Materials; B.8: Thin films; B.9: Sensor production; B.10: Sensor reliability; B.11: Calibration and testing; B.12: Optical fibre sensors. (C) SENSOR ELECTRONICS AND COMMUNICATION: C.1: Sensor electronics; C.2: Sensor networks; C.3: Wireless sensors; C.4: Sensor communication; C.5: Energy harvesting; C.6: Measuring systems; C.7: Embedded systems; C.8: Self-monitoring and diagnosis; (D) APPLICATIONS: D.1: Medical measuring technology; D.2: Ambient assisted living; D.3: Process measuring technology; D.4: Automotive; D.5: Sensors in energy technology; D.6: Production technology; D.7: Security technology; D.8: Smart home; D.9: Household technology. The second part, with the contributions of IRS2 2015, is structured as follows: (E) INFRARED SENSORS: E.1: Photon detectors; E.2: Thermal detectors; E.3: Cooled detectors; E.4: Uncooled detectors; E.5: Sensor modules; E.6: Sensor packaging. (G) INFRARED SYSTEMS AND APPLICATIONS: G.1: Thermal imaging; G.2: Pyrometry / contactless temperature measurement; G.3: Gas analysis; G.4: Spectroscopy; G.5: Motion control and presence detection; G.6: Security and safety monitoring; G.7: Non-destructive testing; F: INFRARED SYSTEM COMPONENTS: F.1: Infrared optics; F.2: Optical modulators; F.3

  4. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since a scalable video codec provides various scalabilities to adapt the bitstream to channel conditions and terminal types, it is well suited to wired and wireless multimedia communication systems such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation significantly degrades visual perception. It is therefore important to use the target bits efficiently in order to maintain a consistent video quality, or a small distortion variation, throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes control video quality in H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
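
    The quantization-parameter decision above can be illustrated with a toy controller. This is a hedged sketch only: the paper derives the QP from a closed-form formula using base-layer residual and quantization error, whereas the proportional update, gain and PSNR values below are illustrative assumptions.

```python
# Hedged sketch: a simple per-frame QP controller for the enhancement layer.
# A higher QP coarsens quantization (lower quality), so when the previous
# frame exceeded the quality target we raise QP, and vice versa.
# All names, constants and PSNR values here are illustrative.

def next_qp(qp_prev, psnr_prev, psnr_target, gain=0.5, qp_min=0, qp_max=51):
    """Nudge the quantization parameter so frame quality stays near target."""
    delta = gain * (psnr_prev - psnr_target)
    qp = round(qp_prev + delta)
    return max(qp_min, min(qp_max, qp))

qps = []
qp = 30
for psnr in [38.2, 36.1, 34.8, 35.6, 36.4]:  # measured PSNR per frame
    qp = next_qp(qp, psnr, psnr_target=36.0)
    qps.append(qp)
```

    A controller of this shape damps quality fluctuation: frames that overshoot the target get a coarser quantizer on the next step, and frames that undershoot get a finer one.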

  5. Integrating soft sensor systems using conductive thread

    Science.gov (United States)

    Teng, Lijun; Jeronimo, Karina; Wei, Tianqi; Nemitz, Markus P.; Lyu, Geng; Stokes, Adam A.

    2018-05-01

    We are part of a growing community of researchers who are developing a new class of soft machines. By using mechanically soft materials (MPa modulus) we can design systems which overcome the bulk-mechanical mismatches between soft biological systems and hard engineered components. To develop fully integrated soft machines—which include power, communications, and control sub-systems—the research community requires methods for interconnecting between soft and hard electronics. Sensors based upon eutectic gallium alloys in microfluidic channels can be used to measure normal and strain forces, but integrating these sensors into systems of heterogeneous Young’s modulus is difficult due to the complexity of finding a material which is electrically conductive, mechanically flexible, and stable over prolonged periods of time. Many existing gallium-based liquid alloy sensors are not mechanically or electrically robust, and have poor stability over time. We present the design and fabrication of a high-resolution pressure-sensor soft system that can transduce normal force into a digital output. In this soft system, which is built on a monolithic silicone substrate, a galinstan-based microfluidic pressure sensor is integrated with a flexible printed circuit board. We used conductive thread as the interconnect and found that this method alleviates problems arising due to the mechanical mismatch between conventional metal wires and soft or liquid materials. Conductive thread is low-cost, it is readily wetted by the liquid metal, it introduces little bending moment into the microfluidic channel, and it can be connected directly onto the copper bond-pads of the flexible printed circuit board. We built a bridge-system to provide stable readings from the galinstan pressure sensor. This system gives linear measurement results between 500 and 3500 Pa of applied pressure. We anticipate that integrated systems of this type will find utility in soft-robotic systems as used for wearable
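
    The reported linear response between 500 and 3500 Pa suggests a simple two-point calibration for the bridge readout. A minimal sketch, assuming illustrative calibration voltages that are not measurements from the paper:

```python
# Hedged sketch: converting a bridge readout voltage to pressure with a
# two-point linear calibration, mirroring the reported linear response
# between 500 and 3500 Pa. The calibration points below are illustrative.

def make_pressure_converter(v_lo, p_lo, v_hi, p_hi):
    """Return a function mapping bridge voltage (V) to pressure (Pa)."""
    slope = (p_hi - p_lo) / (v_hi - v_lo)

    def to_pressure(v):
        return p_lo + slope * (v - v_lo)

    return to_pressure

# Assumed calibration: 0.10 V at 500 Pa, 0.70 V at 3500 Pa.
to_pa = make_pressure_converter(v_lo=0.10, p_lo=500.0, v_hi=0.70, p_hi=3500.0)
```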

  6. Muscular condition monitoring system using fiber Bragg grating sensors

    International Nuclear Information System (INIS)

    Kim, Heon Young; Lee, Jin Hyuk; Kim, Dae Hyun

    2014-01-01

    Fiber optic sensors (FOS) have advantages such as electromagnetic interference (EMI) immunity, corrosion resistance and multiplexing capability. For these reasons, they are widely used in various condition monitoring systems (CMS). This study investigated a muscular condition monitoring system using fiber optic sensors. Generally, sensors for monitoring the condition of the human body are based on electromagnetic devices. However, such an electrical system has several weaknesses, including the potential for electromagnetic interference and distortion. Fiber Bragg grating (FBG) sensors overcome these weaknesses, along with simplifying the devices and increasing user convenience. To measure the level of muscle contraction and relaxation, which indicates the muscle condition, a belt-shaped FBG sensor module that makes it possible to monitor the movement of muscles in the radial and circumferential directions was fabricated in this study. In addition, a uniaxial tensile test was carried out in order to evaluate the applicability of this FBG sensor module. Based on the experimental results, a relationship was observed between the tensile stress and the Bragg wavelength of the FBG sensors, which revealed the possibility of fabricating a muscular condition monitoring system based on FBG sensors.
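
    The observed relationship between load and Bragg wavelength is usually expressed through the standard strain-optic relation Δλ/λ = (1 − p_e)·ε. A minimal sketch, assuming the typical effective photo-elastic coefficient p_e ≈ 0.22 for silica fiber and ignoring temperature cross-sensitivity (the paper does not give its coefficients):

```python
# Hedged sketch: recovering axial strain from an FBG Bragg-wavelength shift
# via dλ/λ = (1 - p_e)·ε, with the typical effective photo-elastic
# coefficient p_e ≈ 0.22 for silica fiber. Temperature effects are ignored.

def strain_from_shift(lambda_bragg_nm, shift_nm, p_e=0.22):
    """Return strain (dimensionless) from a Bragg wavelength shift."""
    return shift_nm / (lambda_bragg_nm * (1.0 - p_e))

# A 1.2 nm shift on a 1550 nm grating corresponds to roughly 1000 microstrain.
eps = strain_from_shift(1550.0, 1.2)
```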

  7. Muscular condition monitoring system using fiber Bragg grating sensors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Heon Young; Lee, Jin Hyuk; Kim, Dae Hyun [Seoul National University of Technology, Seoul (Korea, Republic of)

    2014-10-15

    Fiber optic sensors (FOS) have advantages such as electromagnetic interference (EMI) immunity, corrosion resistance and multiplexing capability. For these reasons, they are widely used in various condition monitoring systems (CMS). This study investigated a muscular condition monitoring system using fiber optic sensors. Generally, sensors for monitoring the condition of the human body are based on electromagnetic devices. However, such an electrical system has several weaknesses, including the potential for electromagnetic interference and distortion. Fiber Bragg grating (FBG) sensors overcome these weaknesses, along with simplifying the devices and increasing user convenience. To measure the level of muscle contraction and relaxation, which indicates the muscle condition, a belt-shaped FBG sensor module that makes it possible to monitor the movement of muscles in the radial and circumferential directions was fabricated in this study. In addition, a uniaxial tensile test was carried out in order to evaluate the applicability of this FBG sensor module. Based on the experimental results, a relationship was observed between the tensile stress and the Bragg wavelength of the FBG sensors, which revealed the possibility of fabricating a muscular condition monitoring system based on FBG sensors.

  8. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications †

    Science.gov (United States)

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori

    2017-01-01

    Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors (an on-chip temperature sensor and off-chip capacitive and resistive tactile sensors) and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954

  9. Closed-loop System Identification with New Sensors

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2008-01-01

    This paper deals with system identification of new system dynamics revealed by online introduction of new sensors in existing multi-variable linear control systems. The so-called "Hansen Scheme" utilises the dual Youla-Kucera parameterisation of all systems stabilised by a given linear controller to transform closed-loop system identification problems into open-loop-like problems. We show that this scheme can be formally extended to accommodate extra sensors in a natural way. The approach is illustrated on a simple simulation example.

  10. Energy storage management system with distributed wireless sensors

    Science.gov (United States)

    Farmer, Joseph C.; Bandhauer, Todd M.

    2015-12-08

    An energy storage system having multiple different types of energy storage and conversion devices. Each device is equipped with one or more sensors and RFID tags to communicate sensor information wirelessly to a central electronic management system, which is used to control the operation of each device. Each device can have multiple RFID tags and sensor types. Several energy storage and conversion devices can be combined.

  11. Adaptive Intrusion Data System (AIDS)

    International Nuclear Information System (INIS)

    Corlis, N.E.

    1980-05-01

    The adaptive intrusion data system (AIDS) was developed to collect data from intrusion alarm sensors as part of an evaluation system to improve sensor performance. AIDS is a unique data system which uses computer-controlled data systems, video cameras and recorders, analog-to-digital conversion, environmental sensors, and digital recorders to collect sensor data. The data can be viewed either manually or with a special computerized data-reduction system which adds new data to a data base stored on a magnetic disc recorder. This report provides a synoptic account of the AIDS as it presently exists. Modifications to the purchased subsystems are described, and references are made to publications which describe the Sandia-designed subsystems.

  12. Breast Ultrasound Examination with Video Monitor System: A Satisfaction Survey among Patients

    International Nuclear Information System (INIS)

    Ryu, Jung Kyu; Kim, Hyun Cheol; Yang, Dal Mo

    2010-01-01

    The purpose of this study is to assess patients' satisfaction with a newly established video monitor system, and the associated basic items, for performing breast ultrasound exams by conducting a survey among the patients. 349 patients who had undergone a breast ultrasound examination during the three months after the monitor system was introduced were invited to take the survey. The questionnaire was composed of 8 questions, 4 of which concerned basic items such as age, gender, the reason for taking the breast ultrasound exam, the preferred gender of the examiner and the desired length of the examination. The other 4 questions concerned satisfaction with the video monitor. The patients were divided into two groups according to the purpose of the exam, screening or diagnostic, and the results were compared between these two groups. Satisfaction with the video monitor system was assessed using a scoring system that ranged from 1 to 5. The screening group was composed of 124 patients and the diagnostic group of 225; the reasons why patients in the diagnostic group wanted the examination varied. On the question about the preferred gender of the examiner, 81.5% in the screening group and 79.1% in the diagnostic group preferred a woman doctor. The suitable length for the breast ultrasound examination was 5 to 10 minutes or 10 to 15 minutes for about 70% of the patients. The mean satisfaction score for the video monitor system was as high as 3.95 points. The proportion of patients in each group who rated their satisfaction with the monitor system above 3 points was 88.7% and 94.2%, respectively. Our study showed that patients preferred 5-15 minutes for the length of the examination and a female examiner. We also confirmed high patient satisfaction with the video monitor system.

  13. Breast Ultrasound Examination with Video Monitor System: A Satisfaction Survey among Patients

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Jung Kyu; Kim, Hyun Cheol; Yang, Dal Mo [East-West Neo Medical Center, Kyung-Hee University, Seoul (Korea, Republic of)

    2010-03-15

    The purpose of this study is to assess patients' satisfaction with a newly established video monitor system, and the associated basic items, for performing breast ultrasound exams by conducting a survey among the patients. 349 patients who had undergone a breast ultrasound examination during the three months after the monitor system was introduced were invited to take the survey. The questionnaire was composed of 8 questions, 4 of which concerned basic items such as age, gender, the reason for taking the breast ultrasound exam, the preferred gender of the examiner and the desired length of the examination. The other 4 questions concerned satisfaction with the video monitor. The patients were divided into two groups according to the purpose of the exam, screening or diagnostic, and the results were compared between these two groups. Satisfaction with the video monitor system was assessed using a scoring system that ranged from 1 to 5. The screening group was composed of 124 patients and the diagnostic group of 225; the reasons why patients in the diagnostic group wanted the examination varied. On the question about the preferred gender of the examiner, 81.5% in the screening group and 79.1% in the diagnostic group preferred a woman doctor. The suitable length for the breast ultrasound examination was 5 to 10 minutes or 10 to 15 minutes for about 70% of the patients. The mean satisfaction score for the video monitor system was as high as 3.95 points. The proportion of patients in each group who rated their satisfaction with the monitor system above 3 points was 88.7% and 94.2%, respectively. Our study showed that patients preferred 5-15 minutes for the length of the examination and a female examiner. We also confirmed high patient satisfaction with the video monitor system.

  14. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is the joint utilization of different data modalities, including those captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users which fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.
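
    The field-of-view detection step can be sketched from positions and compass headings alone. A hedged illustration, assuming planar coordinates in meters and an illustrative 60° horizontal field of view; the paper's actual sensor fusion is richer than this:

```python
import math

# Hedged sketch: deciding whether camera B falls inside camera A's field of
# view from a GPS-derived position and a magnetometer-derived heading.
# Coordinates are planar (meters); the 60-degree FOV is an assumed value.

def in_field_of_view(pos_a, heading_a_deg, pos_b, fov_deg=60.0):
    """True if B lies within A's horizontal field of view."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 deg = north, clockwise
    diff = (bearing - heading_a_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# A camera at the origin pointing north does not see a user to the
# north-east (offset 45 deg) with a 60 deg field of view.
visible = in_field_of_view((0.0, 0.0), 0.0, (7.07, 7.07))
```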

  15. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    Directory of Open Access Journals (Sweden)

    Irfan Mehmood

    2014-09-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially for remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.

  16. Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors.

    Science.gov (United States)

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-09-15

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially for remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.

  17. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    Science.gov (United States)

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially for remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
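
    The redundancy-elimination step above compares frames by the Jeffrey divergence between their color histograms. A minimal sketch with illustrative histograms and an illustrative threshold; the inter-frame Boolean-series correlation and texture classification are omitted:

```python
import math

# Hedged sketch: Jeffrey divergence between two normalized color histograms.
# Frames whose divergence falls below a threshold would be treated as
# near-duplicates. The histograms and threshold below are illustrative.

def jeffrey_divergence(h, k, eps=1e-12):
    """Symmetric, smoothed divergence between two histograms summing to 1."""
    d = 0.0
    for hi, ki in zip(h, k):
        m = (hi + ki) / 2.0 + eps
        d += hi * math.log((hi + eps) / m) + ki * math.log((ki + eps) / m)
    return d

h = [0.20, 0.50, 0.30]                       # histogram of frame t
k = [0.25, 0.45, 0.30]                       # histogram of frame t+1
is_redundant = jeffrey_divergence(h, k) < 0.05   # illustrative threshold
```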

  18. The Video Collaborative Localization of a Miner's Lamp Based on Wireless Multimedia Sensor Networks for Underground Coal Mines.

    Science.gov (United States)

    You, Kaiming; Yang, Wei; Han, Ruisong

    2015-09-29

    Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner's lamp video collaborative localization algorithm was proposed to locate miners in scenes of insufficient illumination and in the bifurcated structures of underground tunnels. In a bifurcation area, several camera nodes are deployed along the longitudinal direction of the tunnels, forming a collaborative cluster in a wireless manner to monitor and locate miners. Cap-lamps are regarded as the identifying feature of miners under the insufficient illumination of underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner's lamp projects mapping points on the imaging planes of the collaborative cameras, and the coordinates of the mapping points are calculated by the collaborative cameras. Then, multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are established. To find the three-dimensional (3D) coordinates of the miner's lamp, a least-squares method is proposed to obtain the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, and they show that the proposed miner's lamp video collaborative localization algorithm has good effectiveness, robustness and localization accuracy in real-world conditions of underground tunnels.
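
    The least-squares intersection of the camera-to-lamp rays has a closed form: minimizing the summed squared distances from a point x to rays p_i + t·d_i yields the normal equations (Σ(I − d_i d_iᵀ)) x = Σ(I − d_i d_iᵀ) p_i. A minimal sketch with illustrative camera poses (the paper's calibration and image processing are not reproduced):

```python
# Hedged sketch: least-squares intersection of several 3D rays, one per
# collaborative camera (position p plus unit direction d toward the lamp's
# image point). Solves the normal equations with a small Gauss-Jordan step.

def closest_point_to_rays(rays):
    """rays: list of (p, d) with p a 3-point and d a unit 3-vector."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for p, d in rays:
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]  # (I - d d^T)_ij
                A[i][j] += m
                b[i] += m * p[j]
    # Solve the 3x3 system A x = b with partial pivoting.
    M = [A[i][:] + [b[i]] for i in range(3)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c and M[c][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Two cameras sighting a lamp at (1, 1, 1): one looking along +x from
# (0, 1, 1), the other along +y from (1, 0, 1).
x = closest_point_to_rays([((0, 1, 1), (1, 0, 0)), ((1, 0, 1), (0, 1, 0))])
```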

  19. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal-processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
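
    The remapping step can be illustrated with simple mean/variance matching of intensity statistics. This is only an assumed illustration: the paper's learning-based mapping is richer, and the function and values below are not from the paper.

```python
# Hedged sketch: remap grayscale intensities so their mean/std match the
# statistics learned from a training dataset, clamped to the 0..255 range.

def match_statistics(pixels, target_mean, target_std):
    """Linearly remap pixel intensities to the target mean and std."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    if std == 0.0:           # flat image: avoid division by zero
        std = 1.0
    out = []
    for p in pixels:
        v = (p - mean) / std * target_std + target_mean
        out.append(max(0, min(255, round(v))))
    return out

# Illustrative frame and target statistics (mean 128, std 10).
remapped = match_statistics([10, 20, 30], target_mean=128, target_std=10)
```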

  20. Virtual Video Prototyping of Pervasive Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Madsen, Kim Halskov

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offer new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design issues, since one cannot avoid paying attention to the physical, real-world constraints and to details in the usage-interaction between users and technology. From the users' perspective, during our evaluation of the virtual video prototype, we experienced how it enabled users to relate...

  1. 75 FR 75186 - Interview Room Video System Standard Special Technical Committee Request for Proposals for...

    Science.gov (United States)

    2010-12-02

    ... DEPARTMENT OF JUSTICE Office of Justice Programs [OJP (NIJ) Docket No. 1534] Interview Room Video System Standard Special Technical Committee Request for Proposals for Certification and Testing Expertise... Interview Room Video System Standard and corresponding certification program requirements. This work is...

  2. On-line methanol sensor system development for recombinant ...

    African Journals Online (AJOL)

    On-line methanol sensor system development for recombinant human serum ... of the methanol sensor system was done in a medium environment with yeast cells ... induction at a low temperature and a pH where protease does not function.

  3. Reconfigurable Sensor Monitoring System

    Science.gov (United States)

    Alhorn, Dean C. (Inventor); Dutton, Kenneth R. (Inventor); Howard, David E. (Inventor); Smith, Dennis A. (Inventor)

    2017-01-01

    A reconfigurable sensor monitoring system includes software tunable filters, each of which is programmable to condition one type of analog signal. A processor coupled to the software tunable filters receives each type of analog signal so-conditioned.

  4. Progress in triboluminescence-based smart optical sensor system

    International Nuclear Information System (INIS)

    Olawale, David O.; Dickens, Tarik; Sullivan, William G.; Okoli, Okenwa I.; Sobanjo, John O.; Wang, Ben

    2011-01-01

    Extensive research work has been done in recent times to apply the triboluminescence (TL) phenomenon for damage detection in engineering structures. Of particular note are the various attempts to apply it in the detection of impact damages in composites and aerospace structures. This is because TL-based sensor systems have a great potential for wireless, in-situ and distributed (WID) structural health monitoring when fully developed. This review article highlights development and the current state-of-the-art in the application of TL-based sensor systems. The underlying mechanisms believed to be responsible for triboluminescence, particularly in zinc sulfide manganese, a highly triboluminescent material, are discussed. The challenges militating against the full exploitation and field application of TL sensor systems are also identified. Finally, viable solutions and approaches to address these challenges are enumerated. - Highlights: the underlying mechanisms believed to be responsible for triboluminescence; the state-of-the-art in the development and application of TL-based sensor systems; the challenges militating against the full exploitation and field application of TL sensor systems; and viable solutions and approaches to address these challenges.

  5. Transparent Fingerprint Sensor System for Large Flat Panel Display.

    Science.gov (United States)

    Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk; Lee, Myunghee

    2018-01-19

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses the capacitance formed across the cover glass material between a human finger and the electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement results demonstrate that the transparent fingerprint sensor system is able to differentiate a human finger's ridges and valleys through the fingerprint sensor array.

  6. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    Science.gov (United States)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described, and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  7. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Science.gov (United States)

    2010-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  8. An Environmental Monitoring System for Managing Spatiotemporal Sensor Data over Sensor Networks

    Directory of Open Access Journals (Sweden)

    Keun Ho Ryu

    2012-03-01

    In a wireless sensor network, sensors collect data about natural phenomena and transmit them to a server in real time. Many studies have focused on processing continuous queries in an approximate form. However, this approach is difficult to apply to environmental applications, which require the correct data to be stored. In this paper, we propose a weather monitoring system for handling and storing the sensor data stream in real time in order to support continuous spatial and/or temporal queries. In our system, we exploit two time-based insertion methods to store the sensor data stream and reduce the number of managed tuples, without losing any of the raw data that are useful for queries, by using the sensors’ temporal attributes. In addition, we offer a method for reducing the cost of the join operations used in processing spatiotemporal queries by filtering out irrelevant sensors from the query range before performing a join operation. In the performance evaluation, the number of tuples obtained from the data stream was reduced by about 30% in comparison to a naïve approach, thereby decreasing the query execution time.
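
    The join-cost reduction described above, pruning sensors outside the query range before joining, can be sketched as follows. The sensor layout, readings and query rectangle are illustrative assumptions, not data from the paper:

```python
# Hedged sketch: prune sensors outside a rectangular query range before a
# spatiotemporal join, so the join only touches relevant sensors.

def in_range(pos, rect):
    """True if position (x, y) lies inside rect = (x1, y1, x2, y2)."""
    (x, y), (x1, y1, x2, y2) = pos, rect
    return x1 <= x <= x2 and y1 <= y <= y2

def filter_then_join(sensors, readings, rect):
    """Keep only readings whose sensor position falls inside rect."""
    relevant = {sid for sid, pos in sensors.items() if in_range(pos, rect)}
    return [(sid, value) for sid, value in readings if sid in relevant]

sensors = {"s1": (1, 1), "s2": (9, 9), "s3": (2, 3)}       # id -> position
readings = [("s1", 21.5), ("s2", 19.0), ("s3", 22.1)]       # id -> temperature
joined = filter_then_join(sensors, readings, rect=(0, 0, 5, 5))
```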

  9. Third-generation imaging sensor system concepts

    Science.gov (United States)

    Reago, Donald A.; Horn, Stuart B.; Campbell, James, Jr.; Vollmerhausen, Richard H.

    1999-07-01

    Second-generation forward-looking infrared (FLIR) sensors, based on either parallel-scanning, long-wave (8-12 µm) time-delay-and-integration HgCdTe detectors or mid-wave (3-5 µm), medium-format staring (640 × 480 pixels) InSb detectors, are being fielded. The science and technology community is now turning its attention toward the definition of a future third generation of FLIR sensors, based on emerging research and development efforts. Modeled third-generation sensor performance demonstrates a significant improvement over second generation, resulting in enhanced lethality and survivability on the future battlefield. In this paper we present the current thinking on what third-generation sensor systems will be and the resulting requirements for third-generation focal plane array detectors. Three classes of sensors have been identified. The high-performance sensor will contain a megapixel or larger array with at least two colors. Higher operating temperatures will also be a goal here, so that power and weight can be reduced. A high-performance uncooled sensor is also envisioned that will perform somewhere between first- and second-generation cooled detectors, but at significantly lower cost, weight, and power. The final third-generation sensor is a very low cost micro sensor. This sensor can open up a whole new IR market because of its small size, weight, and cost. Future unattended throwaway sensors, micro UAVs, and helmet-mounted IR cameras will be the result of this new class.

  10. AMA Conferences 2015. SENSOR 2015. 17th international conference on sensors and measurement technology. IRS² 2015. 14th international conference on infrared sensors and systems. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2015-07-01

    This meeting paper contains presentations of two conferences: SENSOR 2015 and IRS² (= International conference on InfraRed Sensors and systems). The first part of SENSOR 2015 contains the following chapters: (A) SENSOR PRINCIPLES: A.1: Mechanical sensors; A.2: Optical sensors; A.3: Ultrasonic sensors; A.4: Microacoustic sensors; A.5: Magnetic sensors; A.6: Impedance sensors; A.7: Gas sensors; A.8: Flow sensors; A.9: Dimensional measurement; A.10: Temperature and humidity sensors; A.11: Chemosensors; A.12: Biosensors; A.13: Embedded sensors; A.14: Sensor-actuator systems; (B) SENSOR TECHNOLOGY: B.1: Sensor design; B.2: Numerical simulation of sensors; B.3: Sensor materials; B.4: MEMS technology; B.5: Micro-Nano-Integration; B.6: Packaging; B.7: Materials; B.8: Thin films; B.9: Sensor production; B.10: Sensor reliability; B.11: Calibration and testing; B.12: Optical fibre sensors. (C) SENSOR ELECTRONICS AND COMMUNICATION: C.1: Sensor electronics; C.2: Sensor networks; C.3: Wireless sensors; C.4: Sensor communication; C.5: Energy harvesting; C.6: Measuring systems; C.7: Embedded systems; C.8: Self-monitoring and diagnosis; (D) APPLICATIONS: D.1: Medical measuring technology; D.2: Ambient assisted living; D.3: Process measuring technology; D.4: Automotive; D.5: Sensors in energy technology; D.6: Production technology; D.7: Security technology; D.8: Smart home; D.9: Household technology. The second part with the contributions of the IRS² 2015 is structured as follows: (E) INFRARED SENSORS: E.1: Photon detectors; E.2: Thermal detectors; E.3: Cooled detectors; E.4: Uncooled detectors; E.5: Sensor modules; E.6: Sensor packaging. (G) INFRARED SYSTEMS AND APPLICATIONS: G.1: Thermal imaging; G.2: Pyrometry / contactless temperature measurement; G.3: Gas analysis; G.4: Spectroscopy; G.5: Motion control and presence detection; G.6: Security and safety monitoring; G.7: Non-destructive testing; F: INFRARED SYSTEM COMPONENTS: F.1: Infrared optics; F.2: Optical

  11. Baited remote underwater video system (BRUVs) survey of ...

    African Journals Online (AJOL)

    This is the first baited remote underwater video system (BRUVs) survey of the relative abundance, diversity and seasonal distribution of chondrichthyans in False Bay. Nineteen species from 11 families were recorded across 185 sites at between 4 and 49 m depth. Diversity was greatest in summer, on reefs and in shallow ...

  12. Computational multispectral video imaging [Invited].

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
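
    The regularization-based linear inversion mentioned in this abstract can be illustrated with a minimal Tikhonov-regularized least-squares sketch (the sensing matrix, problem sizes, noise level, and regularization weight are assumed for illustration; the paper's actual calibration and solver are not reproduced here):

```python
import numpy as np

# Given a calibrated sensing matrix A mapping a multispectral signal x to the
# spatially coded sensor measurement b, recover x by Tikhonov regularization:
#   x_hat = argmin ||A x - b||^2 + lam ||x||^2  =>  (A^T A + lam I) x = A^T b.
rng = np.random.default_rng(0)
n_pixels, n_bands = 64, 16                     # coded pixels, spectral bands
A = rng.standard_normal((n_pixels, n_bands))   # calibration matrix (assumed known)
x_true = rng.random(n_bands)                   # ground-truth spectrum
b = A @ x_true + 0.01 * rng.standard_normal(n_pixels)  # noisy measurement

lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ b)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    Because the system is overdetermined and the noise is small, the relative reconstruction error stays well below a few percent in this toy setting.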

  13. Advanced interfacing techniques for sensors measurement circuits and systems for intelligent sensors

    CERN Document Server

    Roy, Joyanta; Kumar, V; Mukhopadhyay, Subhas

    2017-01-01

    This book presents ways of interfacing sensors to the digital world and discusses the marriage between sensor systems and the IoT: the opportunities and challenges. As sensor output is often affected by noise and interference, the book presents effective schemes for recovering the data from a signal that is buried in noise. It also explores interesting applications in the areas of health care, unobtrusive monitoring, and the electronic nose and tongue. It is a valuable resource for engineers and scientists in the area of sensors and interfacing who want to update their knowledge of the latest developments in the field and learn more about sensing applications and challenges.

  14. Implementation of an Optical Readout System for High-Sensitivity Terahertz Microelectromechanical Sensor Array

    Science.gov (United States)

    2014-09-01

    ...as the rod moves about the illumination scene, the pixels in the detector start to flicker. The ‘flickering’ effect is due to the metal rod blocking THz radiation; this effect is more apparent in the video. ...it is still possible to mitigate convective heat exchange between the sensor and the ambient surroundings.

  15. Water-Cut Sensor System

    KAUST Repository

    Karimi, Muhammad Akram

    2018-01-11

    Provided in some embodiments is a method of manufacturing a pipe-conformable water-cut sensor system. Provided in some embodiments is a method for manufacturing a water-cut sensor system that includes providing a helical T-resonator, a helical ground conductor, and a separator at an exterior of a cylindrical pipe. The helical T-resonator includes a feed line and a helical open shunt stub conductively coupled to the feed line. The helical ground conductor includes a helical ground plane opposite the helical open shunt stub and a ground ring conductively coupled to the helical ground plane. The feed line overlaps at least a portion of the ground ring, and the separator is disposed between the feed line and the portion of the ground ring overlapped by the feed line to electrically isolate the helical T-resonator from the helical ground conductor.

  16. Optical detection system for MEMS-type pressure sensor

    International Nuclear Information System (INIS)

    Sareło, K; Górecka-Drzazga, A; Dziuban, J A

    2015-01-01

    In this paper a special optical detection system designed for a MEMS-type (micro-electro-mechanical system) silicon pressure sensor is presented. The main part of the optical system—a detection unit with a perforated membrane—is bonded to the silicon sensor and placed in a measuring system. An external light source illuminates the membrane of the pressure sensor. Owing to the light reflected from the deflected sensor membrane, an optical pattern consisting of light points is visible, and the pressure can be estimated. The optical detection unit (20 × 20 × 20.4 mm³) is fabricated using microengineering techniques. Its dimensions are adjusted to the dimensions of the pressure sensor (5 × 5 mm² silicon membrane). Preliminary tests of the optical detection unit integrated with the silicon pressure sensor were carried out. For sensor membranes from 15 to 60 µm thick, repeatable detection of differential pressure in the range of 0 to 280 kPa is achieved. The presented optical microsystem is especially suitable for pressure measurements in a high-radiation environment. (paper)

  17. Toward Sensor-Based Context Aware Systems

    Directory of Open Access Journals (Sweden)

    Kouhei Takada

    2012-01-01

    Full Text Available This paper proposes a methodology for sensor data interpretation that can combine sensor outputs with contexts represented as sets of annotated business rules. Sensor readings are interpreted to generate events labeled with the appropriate type and level of uncertainty. Then, the appropriate context is selected. Reconciliation of different uncertainty types is achieved by a simple technique that moves uncertainty from events to business rules by generating combs of standard Boolean predicates. Finally, context rules are evaluated together with the events to take a decision. The feasibility of our idea is demonstrated via a case study where a context-reasoning engine has been connected to simulated heartbeat sensors using prerecorded experimental data. We use sensor outputs to identify the proper context of operation of a system and trigger decision-making based on context information.

  18. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    Science.gov (United States)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the Fukushima Daiichi nuclear reactor accident in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generator was inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of sensor measurements in the FASSIP system is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failure can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimension of the sensor data, with the Squared Prediction Error (SPE) and Hotelling statistic criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
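
    A minimal sketch of PCA-based sensor fault detection with an SPE criterion, in the spirit of the method described above (the synthetic data, single retained component, and 99th-percentile threshold are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Fit a PCA model on normal-operation data from three correlated sensors,
# then flag samples whose squared prediction error (SPE) exceeds a limit
# estimated from the training residuals.
rng = np.random.default_rng(1)
t = rng.standard_normal((200, 1))                       # latent process variable
normal = t @ np.array([[1.0, 0.8, 1.2]]) + 0.05 * rng.standard_normal((200, 3))

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
P = Vt[:1].T                                            # retain 1 principal component

def spe(sample):
    """Squared prediction error: residual after projecting onto the PCA model."""
    x = sample - mean
    resid = x - P @ (P.T @ x)
    return float(resid @ resid)

limit = np.percentile([spe(s) for s in normal], 99)     # empirical 99% limit

healthy = np.array([1.0, 0.8, 1.2])   # consistent with the learned correlations
drifted = np.array([1.0, 0.8, 2.5])   # third sensor has drifted
```

    A drifted reading breaks the inter-sensor correlation, so its residual leaves the principal subspace and its SPE exceeds the limit, while a healthy reading stays below it.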

  19. Three-dimensional (3-D) video systems: bi-channel or single-channel optics?

    Science.gov (United States)

    van Bergen, P; Kunert, W; Buess, G F

    1999-11-01

    This paper presents the results of a comparison between two different three-dimensional (3-D) video systems, one with single-channel optics, the other with bi-channel optics. The latter integrates two lens systems, each transferring one half of the stereoscopic image; the former uses only one lens system, similar to a two-dimensional (2-D) endoscope, which transfers the complete stereoscopic picture. In our training centre for minimally invasive surgery, surgeons were involved in basic and advanced laparoscopic courses using both a 2-D system and the two 3-D video systems. They completed analog scale questionnaires in order to record a subjective impression of the relative convenience of operating in 2-D and 3-D vision, and to identify perceived deficiencies in the 3-D system. As an objective test, different experimental tasks were developed, in order to measure performance times and to count pre-defined errors made while using the two 3-D video systems and the 2-D system. Using the bi-channel optical system, the surgeon has a heightened spatial perception, and can work faster and more safely than with a single-channel system. However, single-channel optics allow the use of an angulated endoscope, and the free rotation of the optics relative to the camera, which is necessary for some operative applications.

  20. A Simple FSPN Model of P2P Live Video Streaming System

    OpenAIRE

    Kotevski, Zoran; Mitrevski, Pece

    2011-01-01

    Peer-to-Peer (P2P) live streaming is a relatively new paradigm that aims at streaming live video to a large number of clients at low cost. Many such applications already exist in the market but, prior to creating such a system, it is necessary to analyze its performance via a representative model that can provide good insight into the system’s behavior. Modeling and performance analysis of P2P live video streaming systems is a challenging task which requires addressing many properties and issues of P2P s...

  1. Bio-integrated electronics and sensor systems

    Science.gov (United States)

    Yeo, Woon-Hong; Webb, R. Chad; Lee, Woosik; Jung, Sungyoung; Rogers, John A.

    2013-05-01

    Skin-mounted epidermal electronics, a strategy for bio-integrated electronics, provide an avenue to non-invasive monitoring of clinically relevant physiological signals for healthcare applications. Current conventional systems consist of single-point sensors fastened to the skin with adhesives, and sometimes with conducting gels, which limits their use outside of clinical settings due to loss of adhesion and irritation to the user. In order to facilitate extended use of skin-mounted healthcare sensors without disrupting everyday life, we envision electronic monitoring systems that integrate seamlessly with the skin below the notice of the user. This manuscript reviews recent significant results towards our goal of wearable electronic sensor systems for long-term monitoring of physiological signals. Ultra-thin epidermal electronic systems (EES) are demonstrated for extended use on the skin, in a conformal manner, including during everyday bathing and sleeping activities. We describe the assessment of clinically relevant physiological parameters, such as electrocardiograms (ECG), electromyograms (EMG), electroencephalograms (EEG), temperature, mechanical strain and thermal conductivity, using examples of multifunctional EES devices. Additionally, we demonstrate the capability for real-life application of EES by monitoring system functionality, which shows no discernible change during cyclic fatigue testing.

  2. Manageable and Extensible Video Streaming Systems for On-Line Monitoring of Remote Laboratory Experiments

    Directory of Open Access Journals (Sweden)

    Jian-Wei Lin

    2009-08-01

    Full Text Available To enable clients to view real-time video of the instruments involved in a remote experiment, two real-time video streaming systems are devised. One is for remote experiments whose instruments are located in one geographic spot, and the other is for those whose instruments are scattered over different places. By running concurrent streaming processes on a server, multiple instruments can be monitored simultaneously by different clients. The proposed systems possess excellent extensibility; that is, new digital cameras for instruments can easily be added without modifying any software. They are also well-manageable, meaning that an administrator can conveniently adjust the quality of the real-time video depending on system load and visual requirements. Finally, the CPU utilization and bandwidth consumption of the systems have been evaluated to verify the effectiveness of the proposed solutions.

  3. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    Science.gov (United States)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high-resolution video sensors. Time-of-Flight sensor fusion is a highly active field of research. Over recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high-quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running a sophisticated Time-of-Flight sensor fusion capture system.
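
    The calibration sensitivity discussed above enters through the geometric mapping at the core of ToF/video fusion: back-project a ToF pixel to 3-D using its depth, transform it into the video camera's frame, and project it onto the video image plane. A minimal sketch (all intrinsic and extrinsic calibration values are assumed, not taken from the article):

```python
import numpy as np

K_tof = np.array([[200.0, 0, 80], [0, 200.0, 60], [0, 0, 1]])    # ToF intrinsics
K_rgb = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # video intrinsics
R = np.eye(3)                    # rotation ToF -> video (assumed aligned)
t = np.array([0.05, 0.0, 0.0])   # assumed 5 cm horizontal baseline

def tof_to_rgb(u, v, depth):
    """Map a ToF pixel (u, v) with metric depth to video image coordinates."""
    ray = np.linalg.inv(K_tof) @ np.array([u, v, 1.0])  # back-project to a ray
    X_tof = ray * depth                                 # 3-D point, ToF frame
    X_rgb = R @ X_tof + t                               # same point, video frame
    uvw = K_rgb @ X_rgb                                 # project to video plane
    return uvw[:2] / uvw[2]

uv = tof_to_rgb(80, 60, 2.0)   # ToF principal-point pixel at 2 m depth
```

    Errors in K_tof, K_rgb, R, t, or the depth measurement propagate directly through this chain, which is why fusion quality is sensitive to calibration accuracy.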

  4. Detection of Visual Events in Underwater Video Using a Neuromorphic Saliency-based Attention System

    Science.gov (United States)

    Edgington, D. R.; Walther, D.; Cline, D. E.; Sherlock, R.; Salamy, K. A.; Wilson, A.; Koch, C.

    2003-12-01

    The Monterey Bay Aquarium Research Institute (MBARI) uses high-resolution video equipment on remotely operated vehicles (ROV) to obtain quantitative data on the distribution and abundance of oceanic animals. High-quality video data supplants the traditional approach of assessing the kinds and numbers of animals in the oceanic water column by towing collection nets behind ships. Tow nets are limited in spatial resolution and often destroy abundant gelatinous animals, resulting in species undersampling. Video camera-based quantitative video transects (QVT) are taken through the ocean midwater, from 50 m to 4000 m, and provide high-resolution data at the scale of individual animals and their natural aggregation patterns. However, the current manual method of analyzing QVT video by trained scientists is labor-intensive and poses a serious limitation on the amount of information that can be analyzed from ROV dives. Presented here is an automated system for detecting marine animals (events) visible in the videos. Automated detection is difficult due to the low contrast of many translucent animals and due to debris ("marine snow") cluttering the scene. Video frames are processed with an artificial intelligence attention selection algorithm that has proven a robust means of target detection in a variety of natural terrestrial scenes. The candidate locations identified by the attention selection module are tracked across video frames using linear Kalman filters. Typically, the occurrence of visible animals in the video footage is sparse in space and time. A notion of "boring" video frames (frames that do not contain any "interesting" events) is developed by detecting whether an interesting candidate object for an animal is present in a particular sequence of underwater video. If objects can be tracked successfully over several frames, they are stored as potentially "interesting" events. Based on low-level properties, interesting events are
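
    The linear Kalman tracking step mentioned in this abstract can be sketched with a constant-velocity model (the state layout and noise covariances are illustrative assumptions, not the system's actual parameters):

```python
import numpy as np

# Constant-velocity linear Kalman filter for tracking a candidate location
# across video frames. State: [x, y, vx, vy]; measurement: [x, y].
F = np.array([[1, 0, 1, 0],     # state transition (unit frame interval)
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 1e-3 * np.eye(4)            # process noise (assumed)
R = 1e-1 * np.eye(2)            # measurement noise (assumed)

x = np.zeros(4)                 # initial state estimate
P = np.eye(4)                   # initial state covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a pixel measurement z = [x, y]."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Feed detections of a target moving one pixel per frame along x.
for frame in range(1, 11):
    x, P = kalman_step(x, P, np.array([float(frame), 0.0]))
```

    After a few frames of consistent motion the velocity estimate converges, so the predicted location can bridge frames where the low-contrast target is briefly missed.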

  5. Embedded Sensor Systems for Health - A Step Towards Personalized Health.

    Science.gov (United States)

    Lindén, Maria; Björkman, Mats

    2018-01-01

    Demographics are shifting towards older people, and the challenge of providing appropriate care is well known. Sensor systems combined with IT solutions are recognized as one of the major tools to handle this situation. Embedded Sensor Systems for Health (ESS-H) is a research profile at Mälardalen University in Sweden, focusing on embedded sensor systems for health technology applications. The research addresses several important issues: to provide sensor systems for health monitoring at home, to provide sensor systems for health monitoring at work, and to provide safe and secure infrastructure and software testing methods for physiological data management. The user perspective is important in order to solve real problems and to develop systems that are easy and intuitive to use. One of the overall aims is to enable health trend monitoring in home environments, thus being able to detect early deterioration of a patient. Sensor systems, signal processing algorithms, and decision support algorithms have been developed. Work on the development of safe and secure infrastructure and software testing methods is important for an embedded sensor system aimed at health monitoring, both in home and in work applications. Patient data must be sent and received in a safe and secure manner, also fulfilling the integrity criteria.

  6. Battery system with temperature sensors

    Science.gov (United States)

    Wood, Steven J.; Trester, Dale B.

    2012-11-13

    A battery system to monitor temperature includes at least one cell with a temperature sensing device proximate the at least one cell. The battery system also includes a flexible member that holds the temperature sensor proximate to the at least one cell.

  7. A Novel System for Supporting Autism Diagnosis Using Home Videos: Iterative Development and Evaluation of System Design.

    Science.gov (United States)

    Nazneen, Nazneen; Rozga, Agata; Smith, Christopher J; Oberleitner, Ron; Abowd, Gregory D; Arriaga, Rosa I

    2015-06-17

    Observing behavior in the natural environment is valuable for obtaining an accurate and comprehensive assessment of a child's behavior, but in practice assessment is limited to in-clinic observation. Research shows a significant time lag between when parents first become concerned and when the child is finally diagnosed with autism. This lag can delay early interventions that have been shown to improve developmental outcomes. Our objective was to develop and evaluate the design of an asynchronous system that allows parents to easily collect clinically valid in-home videos of their child's behavior and supports diagnosticians in completing a diagnostic assessment of autism. First, interviews were conducted with 11 clinicians and 6 families to solicit feedback from stakeholders about the system concept. Next, the system was iteratively designed, informed by the experiences of families using it in a controlled home-like experimental setting and by a participatory design process involving domain experts. Finally, an in-field evaluation of the system design was conducted with 5 families of children (4 with a previous autism diagnosis and 1 typically developing) and 3 diagnosticians. For each family, 2 diagnosticians, blind to the child's previous diagnostic status, independently completed an autism diagnosis via our system. We compared the outcome of the assessment between the 2 diagnosticians, and between each diagnostician and the child's previous diagnostic status. The system that resulted from the iterative design process includes (1) NODA smartCapture, a mobile phone-based application for parents to record prescribed video evidence at home; and (2) NODA Connect, a Web portal for diagnosticians to direct in-home video collection, access developmental history, and conduct an assessment by linking evidence of behaviors tagged in the videos to the Diagnostic and Statistical Manual of Mental Disorders criteria. Applying clinical judgment, the diagnostician then concludes a diagnostic outcome. During field

  8. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain via Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to the Nios-II processor using Altera's Avalon Memory-Mapped protocol.

  9. Urinary incontinence monitoring system using laser-induced graphene sensors

    KAUST Repository

    Nag, Anindya

    2017-12-25

    This paper presents the design and development of a sensor patch to be used in a sensing system to deal with the urinary incontinence problem primarily faced by women and elderly people. The sensor patches were developed from laser-induced graphene made from low-cost commercial polyimide (PI) polymers. The graphene was manually transferred to a commercial tape, which was used as the sensor patch for experimentation. Salt solutions with different concentrations were tested to determine the most sensitive frequency region of the sensor. The results are encouraging for further developing this sensor into a platform for a fully functional urinary incontinence detection system.

  10. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer, will be set up and performance tested in accordance with the original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

  11. Development of an equipment diagnostic system that evaluates sensor drift

    International Nuclear Information System (INIS)

    Kanada, Masaki; Arita, Setsuo; Tada, Nobuo; Yokota, Katsuo

    2011-01-01

    The importance of condition monitoring technology for equipment has increased with the introduction of condition-based maintenance in nuclear power plants. We are developing a diagnostic system using process signals for plant equipment such as pumps and motors. It is important to enable the diagnostic system to distinguish sensor drift from equipment failure. We have developed a sensor drift diagnostic method that combines some highly correlated sensor signals by using the MT (Mahalanobis-Taguchi) method. Furthermore, we have developed an equipment failure diagnostic method that measures the Mahalanobis distance from the normal state of the equipment by the MT method. These methods can respectively detect sensor drift and equipment failure, but they have the following problems. In the sensor drift diagnosis, there is a possibility of misjudging sensor drift when an equipment failure occurs and the process signal changes, because the behavior of the process signal is the same as that of sensor drift. Conversely, in the equipment failure diagnosis, there is a possibility of misjudging equipment failure when sensor drift occurs, because the drift influences the change of the process signal. To solve these problems, we propose a diagnostic method combining the sensor drift diagnosis and the equipment failure diagnosis by the MT method. First, the sensor drift values are estimated by the sensor drift diagnosis, and the drift is removed from the process signal. It is necessary to judge the validity of the estimated drift values before removing them from the process signal; we developed a method for judging this validity by using the drift distribution based on the sensor calibration data. The equipment failure is then diagnosed using the process signals after removal of the sensor drifts. To verify the developed diagnostic system, several sets of simulation data based on abnormal cases
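
    The Mahalanobis-distance health index at the core of the MT method can be sketched as follows (the reference data, sensor pairing, and example readings are illustrative assumptions, not from the paper):

```python
import numpy as np

# Build a "normal state" reference group for two correlated process signals
# (e.g. a pump's flow and vibration), then score new readings by their
# Mahalanobis distance from that group.
rng = np.random.default_rng(2)
normal = rng.multivariate_normal([50.0, 1.2],
                                 [[4.0, 0.1], [0.1, 0.01]], 500)

mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis(x):
    """Distance of reading x from the normal-state reference distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

d_normal = mahalanobis(np.array([51.0, 1.25]))   # near the normal cluster
d_abnormal = mahalanobis(np.array([40.0, 2.0]))  # far from the normal state
```

    Normal-state readings score distances near 1, while a reading that breaks the learned correlation structure scores much higher, which is the basis for the failure decision.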

  12. A smart sensor-based vision system: implementation and evaluation

    International Nuclear Information System (INIS)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R

    2006-01-01

    One method of addressing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing data flow exchanges with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach used is compared with two architectures implementing CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison covers image-processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  13. A smart sensor-based vision system: implementation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R [Institute of Fundamental Electronics, Bat. 220, Paris XI University, 91405 Orsay (France)

    2006-04-21

    One way of addressing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach used is compared with two architectures based on CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  14. Capillarity-based preparation system for optical colorimetric sensor arrays.

    Science.gov (United States)

    Luo, Xiao-Gang; Yi, Xin; Bu, Xiang-Nan; Hou, Chang-Jun; Huo, Dan-Qun; Yang, Mei; Fa, Huan-Bao; Lei, Jin-Can

    2017-03-01

    In recent years, optical colorimetric sensor arrays have demonstrated beneficial features, including rapid response, high selectivity, and high specificity; as a result, they have been extensively applied in food inspection and chemical studies, among other fields. Instruments are available on the current market for the preparation of optical colorimetric sensor arrays, but research on the corresponding preparation mechanism is lacking. Therefore, with a view to the main requirements of this kind of sensor array, such as consistency, this paper develops a diffusion model of an optical colorimetric sensor array during its preparation, based on the contact-spotting preparation method combined with a capillary fluid model, the Washburn equation, and the Laplace equation, and sets up an optical colorimetric sensor array preparation system based on this diffusion model. Finally, this paper compares and evaluates sensor arrays prepared by the system and prepared manually in three respects: the quality of the array spots, the response of the array, and the response result. The results show that the performance of the sensor array prepared by the system under this diffusion model is better than that of the manually spotted array, which meets the needs of the experiment.
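
The Washburn equation cited above gives the penetration length of a liquid into a capillary as a function of time, L(t) = sqrt(γ r t cos θ / (2 η)). A minimal numeric sketch, assuming the properties of water at room temperature rather than any fluid from the paper:

```python
import math

def washburn_length(gamma, radius, theta_deg, viscosity, t):
    """Washburn penetration length after time t:
    L = sqrt(gamma * r * t * cos(theta) / (2 * eta))."""
    return math.sqrt(gamma * radius * t * math.cos(math.radians(theta_deg))
                     / (2.0 * viscosity))

# Water at ~20 degC (gamma = 72.8 mN/m, eta = 1.0 mPa.s) in a 10 um capillary,
# fully wetting surface (theta = 0), after 10 ms
L = washburn_length(gamma=0.0728, radius=10e-6, theta_deg=0.0,
                    viscosity=1.0e-3, t=0.01)  # metres, about 1.9 mm
```

Note the square-root time dependence: quadrupling the time only doubles the penetration length, which is why spot diffusion during array preparation settles quickly.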

  15. CMOS-MEMS Chemiresistive and Chemicapacitive Chemical Sensor System

    Science.gov (United States)

    Lazarus, Nathan S.

    Integrating chemical sensors with testing electronics is a powerful technique with the potential to lower power and cost and allow for lower system limits of detection. This thesis explores the possibility of creating an integrated sensor system intended to be embedded within respirator cartridges to notify the user that hazardous chemicals will soon leak into the face mask. For a chemical sensor designer, this application is particularly challenging due to the need for a very sensitive and cheap sensor that will be exposed to widely varying environmental conditions during use. An octanethiol-coated gold nanoparticle chemiresistor to detect industrial solvents is developed, focusing on characterizing the environmental stability and limits of detection of the sensor. Since the chemiresistor was found to be highly sensitive to water vapor, a series of highly sensitive humidity sensor topologies was developed, with sensitivities several times those achieved by previous integrated capacitive humidity sensors. Circuit techniques were then explored to reduce the humidity sensor limits of detection, including the analysis of noise, charge injection, jitter and clock feedthrough in a charge-based capacitance measurement (CBCM) circuit and the design of a low-noise Colpitts LC oscillator. The characterization of high-resistance gold nanoclusters for capacitive chemical sensing was also performed. In the final section, a preconcentrator, a heater element intended to release a brief concentrated pulse of analyte, was developed and tested for the purpose of lowering the system limit of detection.

  16. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    Science.gov (United States)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provisioning feed with minimal waste, to determining whether the accumulation of organic-matter residues dictates an exchange of pond water, and to management decisions concerning shrimp health.

  17. Transparent Fingerprint Sensor System for Large Flat Panel Display

    Directory of Open Access Journals (Sweden)

    Wonkuk Seo

    2018-01-01

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom read-out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses the capacitance formed across the cover glass between a human finger and the electrode of each pixel of the sensor array. Three methods for estimating the self-capacitance are reviewed. The measurement results demonstrate that the transparent fingerprint sensor system is able to differentiate the ridges and valleys of a human finger through the fingerprint sensor array.
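
The abstract does not detail its three capacitance-estimation methods. One generic approach for small self-capacitances is to time an RC discharge through a known resistor: v(t) = V0·e^(-t/RC), so C = t / (R·ln(V0/Vt)). A sketch with hypothetical component values, not the paper's circuit:

```python
import math

def capacitance_from_discharge(t_discharge, r_known, v0, v_threshold):
    """Estimate C from the time an RC node takes to fall from v0 to
    v_threshold through a known resistor: C = t / (R * ln(v0 / v_threshold))."""
    return t_discharge / (r_known * math.log(v0 / v_threshold))

# Hypothetical values: 1 Mohm reference resistor, 3.3 V rail,
# 1.0 V comparator threshold, measured discharge time 1.2 us
c = capacitance_from_discharge(t_discharge=1.2e-6, r_known=1.0e6,
                               v0=3.3, v_threshold=1.0)  # about 1 pF
```

A finger over a pixel adds to this baseline capacitance, lengthening the discharge time, which is what a ridge/valley map is built from.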

  18. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems digitize the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
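
The alternating one-pixel line pattern described above can be scored numerically by its Michelson contrast: a capture whose slew rate tracks the pattern keeps the modulation near 1, while a slew-rate-limited capture flattens the lines toward an average gray. A small sketch with synthetic pixel rows (the numbers are invented, not measurements from the study):

```python
import numpy as np

def modulation_depth(line):
    """Michelson contrast of a captured one-pixel black/white line pattern:
    (max - min) / (max + min)."""
    lo, hi = float(line.min()), float(line.max())
    return (hi - lo) / (hi + lo)

sharp = np.array([0, 255, 0, 255, 0, 255], dtype=float)       # fully resolved
blurred = np.array([100, 155, 100, 155, 100, 155], dtype=float)  # slew-limited

m_sharp = modulation_depth(sharp)    # 1.0
m_blur = modulation_depth(blurred)   # roughly 0.22
```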

  19. Active Sensing System with In Situ Adjustable Sensor Morphology

    Science.gov (United States)

    Nurzaman, Surya G.; Culha, Utku; Brodbeck, Luzius; Wang, Liyu; Iida, Fumiya

    2013-01-01

    Background Despite the widespread use of sensors in engineering systems like robots and automation systems, the common paradigm is to have a fixed sensor morphology tailored to a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. To cope with this challenge, it is worth noting that biological systems show the importance of suitable sensor morphology and active sensing capability in handling different kinds of sensing tasks with particular requirements. Methodology This paper presents a robotic active sensing system which is able to adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use a thermoplastic adhesive material, i.e. Hot Melt Adhesive (HMA). It will be shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach and detach mechanical structures of a variety of shapes and sizes to the robot end effector for sensing purposes. Via its active sensing capability, the robotic system utilizes the structure to physically probe an unknown target object with suitable motion and transduce the arising physical stimuli into information usable by a camera as its only built-in sensor. Conclusions/Significance The efficacy of the proposed system is verified based on two results. Firstly, it is confirmed that suitable sensor morphology and active sensing capability enable the system to sense different physical quantities, i.e. softness and temperature, with desirable sensing characteristics. Secondly, given the tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to accomplish them autonomously. The ways in which the results motivate new research directions focusing on in situ adjustment of sensor morphology are also discussed. PMID:24416094

  20. Active sensing system with in situ adjustable sensor morphology.

    Science.gov (United States)

    Nurzaman, Surya G; Culha, Utku; Brodbeck, Luzius; Wang, Liyu; Iida, Fumiya

    2013-01-01

    Despite the widespread use of sensors in engineering systems like robots and automation systems, the common paradigm is to have a fixed sensor morphology tailored to a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. To cope with this challenge, it is worth noting that biological systems show the importance of suitable sensor morphology and active sensing capability in handling different kinds of sensing tasks with particular requirements. This paper presents a robotic active sensing system which is able to adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use a thermoplastic adhesive material, i.e. Hot Melt Adhesive (HMA). It will be shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach and detach mechanical structures of a variety of shapes and sizes to the robot end effector for sensing purposes. Via its active sensing capability, the robotic system utilizes the structure to physically probe an unknown target object with suitable motion and transduce the arising physical stimuli into information usable by a camera as its only built-in sensor. The efficacy of the proposed system is verified based on two results. Firstly, it is confirmed that suitable sensor morphology and active sensing capability enable the system to sense different physical quantities, i.e. softness and temperature, with desirable sensing characteristics. Secondly, given the tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to accomplish them autonomously. The ways in which the results motivate new research directions focusing on in situ adjustment of sensor morphology are also discussed.

  1. Geographic Video 3d Data Model And Retrieval

    Science.gov (United States)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction, and they are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video contents. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. Together with the video segment and video frame, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and the three categories of geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We designed the query methods using the structured query language (SQL) in detail. The experiments indicate that the model is an integrated, loosely coupled, flexible and extensible data model serving multiple objectives in the management of geographic stereo video.
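
A query against a viewing-cone geometry such as VFFovCone reduces, in 2D, to a point-in-polygon test against a cone built from the camera position, azimuth and angle of view. The construction below is a simplified sketch of that idea (a triangle approximating the cone), not the paper's actual geometry model:

```python
import math

def fov_triangle(cam_xy, azimuth_deg, aov_deg, dist):
    """2D field-of-view cone, approximated as a triangle, from camera
    position, azimuth (degrees from north) and angle of view."""
    ax, ay = cam_xy
    pts = [cam_xy]
    for off in (-aov_deg / 2.0, aov_deg / 2.0):
        a = math.radians(azimuth_deg + off)
        pts.append((ax + dist * math.sin(a), ay + dist * math.cos(a)))
    return pts

def point_in_triangle(p, tri):
    """Sign-of-cross-product containment test, orientation independent."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    def sign(ax, ay, bx, by, cx, cy):
        return (ax - cx) * (by - cy) - (bx - cx) * (ay - cy)
    d1 = sign(*p, x1, y1, x2, y2)
    d2 = sign(*p, x2, y2, x3, y3)
    d3 = sign(*p, x3, y3, x1, y1)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)

# Camera at the origin facing north, 60 degree angle of view, 100 m range
tri = fov_triangle((0.0, 0.0), azimuth_deg=0.0, aov_deg=60.0, dist=100.0)
inside = point_in_triangle((0.0, 50.0), tri)    # in front of the camera
outside = point_in_triangle((0.0, -10.0), tri)  # behind the camera
```

A "which frames see this landmark" query is then a scan of such containment tests over per-frame cones, which in a real system would be pushed into spatial SQL predicates.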

  2. Design of a system based on DSP and FPGA for video recording and replaying

    Science.gov (United States)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system built on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system achieves encoding, recording, decoding and replaying of the Video Graphics Array (VGA) signals displayed on a monitor during the navigation of aircraft and ships. In this architecture, the DSP is the main processor, handling the large amount of complicated calculation in digital signal processing. The FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer in the system. This transfer mode avoids a data-transfer bottleneck and simplifies the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) hard disk, which offers high-speed data access and does not rely on a computer. The main functions of the logic on the FPGA are described and screenshots of the behavioral simulation are provided in this paper. In the design of the program on the DSP, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM without intervention by the CPU, exploiting the CPU's high computing performance and saving its time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways and means of acquiring high performance for the code are briefly presented. The data-processing ability of the system is desirable, and the smoothness of the replayed video is acceptable. By right of its design flexibility and reliable operation, the system based on DSP and FPGA

  3. Miniaturized, low power FGMOSFET radiation sensor and wireless dosimeter system

    KAUST Repository

    Arsalan, Muhammad; Shamim, Atif; Tarr, Nicholas Garry; Roy, Langis

    2013-01-01

    A miniaturized floating gate (FG) MOSFET radiation sensor system is disclosed. The sensor preferably comprises a matched pair of sensor and reference FGMOSFETs, wherein the sensor FGMOSFET has a larger-area floating gate with an extension over a field oxide layer, for accumulation of charge and increased sensitivity. Elimination of a conventional control gate and injector gate reduces capacitance, increases sensitivity, and allows for fabrication using standard low-cost CMOS technology. A sensor system may be provided with integrated signal processing electronics, for monitoring a change in differential channel current I_D, indicative of radiation dose, and an integrated negative bias generator for automatic pre-charging from a low voltage power source. Optionally, the system may be coupled to a wireless transmitter. A compact wireless sensor System on Package solution is presented, suitable for dosimetry for radiotherapy or other biomedical applications.

  4. Miniaturized, low power FGMOSFET radiation sensor and wireless dosimeter system

    KAUST Repository

    Arsalan, Muhammad

    2013-08-27

    A miniaturized floating gate (FG) MOSFET radiation sensor system is disclosed. The sensor preferably comprises a matched pair of sensor and reference FGMOSFETs, wherein the sensor FGMOSFET has a larger-area floating gate with an extension over a field oxide layer, for accumulation of charge and increased sensitivity. Elimination of a conventional control gate and injector gate reduces capacitance, increases sensitivity, and allows for fabrication using standard low-cost CMOS technology. A sensor system may be provided with integrated signal processing electronics, for monitoring a change in differential channel current I_D, indicative of radiation dose, and an integrated negative bias generator for automatic pre-charging from a low voltage power source. Optionally, the system may be coupled to a wireless transmitter. A compact wireless sensor System on Package solution is presented, suitable for dosimetry for radiotherapy or other biomedical applications.
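
Reading out such a dosimeter amounts to converting the differential channel current of the matched sensor/reference pair into dose through a calibration slope. A minimal sketch, assuming a linear response; the sensitivity value is hypothetical, not from the patent:

```python
def dose_from_current_shift(i_sensor, i_reference, sensitivity_a_per_gy):
    """Estimate absorbed dose from the differential channel current of the
    matched sensor/reference FGMOSFET pair, assuming a linear calibration.
    Radiation-induced trapped charge lowers the sensor-side current."""
    delta_i = i_reference - i_sensor
    return delta_i / sensitivity_a_per_gy

# Hypothetical calibration: 2 uA of differential current shift per gray
dose = dose_from_current_shift(i_sensor=98.0e-6, i_reference=100.0e-6,
                               sensitivity_a_per_gy=2.0e-6)  # 1.0 Gy
```

The matched reference device cancels temperature and supply drifts, which is why the dose estimate uses the difference rather than the sensor current alone.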

  5. Air to fuel ratio sensor for internal combustion engine control system; Nainen kikan no nensho seigyoyo kunen hi sensor

    Energy Technology Data Exchange (ETDEWEB)

    Tsuzuki, M.; Kawai, T.; Yamada, T.; Nishio [NGK Spark Plug Co. Ltd., Aichi (Japan)

    1998-06-01

    The air to fuel ratio sensor is used in the emission control system of a three-way catalyst, and constitutes an important functional part of the combustion control system. For more precise combustion control applications, a universal air to fuel ratio heated exhaust gas oxygen sensor (UEGO sensor) has been developed. This paper introduces a heater control system that holds the element temperature of the UEGO sensor constant. By feedback control of the heater wattage based on the impedance of the sensing cell, variation of the sensor element temperature is reduced. 9 refs., 13 figs.
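
The constant-element-temperature scheme can be sketched as a feedback loop that sets heater wattage from the measured cell impedance, since the impedance falls as the element heats. The PI gains, impedance numbers and first-order thermal model below are illustrative only, not values from the paper:

```python
def heater_power_step(z_measured, z_target, integral, kp=0.5, ki=0.05):
    """One step of a PI loop holding the sensing-cell impedance at its target.
    Impedance too high means the element is too cold, so a positive error
    calls for more heater power; negative power is clamped to zero."""
    error = z_measured - z_target
    integral += error
    power = kp * error + ki * integral
    return max(0.0, power), integral

# Toy first-order thermal model: impedance relaxes toward its cold value
# (120 ohm at ambient) while heater power drives it down toward the target.
z, integral = 120.0, 0.0
for _ in range(300):
    power, integral = heater_power_step(z, z_target=30.0, integral=integral)
    z += 0.05 * (120.0 - z) - 0.2 * power
```

The integral term is what removes the steady-state offset: it settles at whatever heater power exactly balances the element's heat loss at the target impedance.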

  6. Evaluation of the educational value of YouTube videos about physical examination of the cardiovascular and respiratory systems.

    Science.gov (United States)

    Azer, Samy A; Algrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M

    2013-11-13

    A number of studies have evaluated the educational contents of videos on YouTube. However, little analysis has been done on videos about physical examination. This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three assessors for videos covering the clinical examination of the cardiovascular and respiratory systems. For each video, the following information was collected: title, authors, duration, number of viewers, and total number of days on YouTube. Using criteria comprising content, technical authority, and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-useful videos. A total of 1920 videos were screened. Only relevant videos covering the examination of adults in the English language were identified (n=56). Of these, 20 were found to be relevant to cardiovascular examinations and 36 to respiratory examinations. Further analysis revealed that 9 provided useful information on cardiovascular examinations and 7 on respiratory examinations: scoring mean 14.9 (SD 0.33) and mean 15.0 (SD 0.00), respectively. The other videos, 11 covering cardiovascular and 29 covering respiratory examinations, were not educationally useful, scoring mean 11.1 (SD 1.08) and mean 11.2 (SD 1.29), respectively. The differences between these two categories were statistically significant. A small number of videos about physical examination of the cardiovascular and respiratory systems were identified as educationally useful; these videos can be used by medical students for independent learning and by clinical teachers as learning resources. The scoring system utilized by this study is simple, easy to apply, and could be used by other researchers on similar topics.

  7. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
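
The ROI evaluations the EDICAM runs (minimum/maximum/mean compared to levels) can be mimicked in a few lines of host-side code; the frame size, ROI coordinates and trigger level below are made up for illustration:

```python
import numpy as np

def roi_event(frame, roi, level, mode="max"):
    """Evaluate one Region of Interest of a frame and flag an event when the
    chosen statistic (min/max/mean) exceeds a level, mimicking the on-camera
    ROI comparisons that can redirect the readout."""
    y0, y1, x0, x1 = roi
    patch = frame[y0:y1, x0:x1]
    value = {"min": patch.min(), "max": patch.max(), "mean": patch.mean()}[mode]
    return value, bool(value > level)

# Synthetic 12-bit-style frame with a bright transient inside the ROI
frame = np.zeros((480, 640), dtype=np.uint16)
frame[100:110, 200:210] = 4000
value, fired = roi_event(frame, roi=(90, 120, 190, 220), level=1000, mode="max")
```

In the camera itself such a comparison can change the readout schedule or raise an output signal, which is what allows sub-ms monitoring of small ROIs during a long overview exposure.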

  8. Wireless energizing system for an automated implantable sensor

    Energy Technology Data Exchange (ETDEWEB)

    Swain, Biswaranjan; Nayak, Praveen P.; Kar, Durga P.; Bhuyan, Satyanarayan; Mishra, Laxmi P. [Department of Electronics and Instrumentation Engineering, Siksha ‘O’ Anusandhan University, Bhubaneswar 751030 (India)

    2016-07-15

    The wireless drive of an automated implantable electronic sensor has been explored for health monitoring applications. The proposed system comprises an automated biomedical sensing system which is energized through resonant inductive coupling. The implantable sensor unit monitors the body temperature and sends the corresponding telemetry data back wirelessly to the data recording unit. It has been observed that the wireless power delivery system is capable of energizing the automated biomedical implantable electronic sensor placed at a distance of 3 cm from the power transmitter, with an energy transfer efficiency of 26% at the operating resonant frequency of 562 kHz. The proposed method ensures real-time monitoring of different human body temperatures around the clock. The monitored temperature data have been compared with a calibrated temperature measurement system to ascertain the accuracy of the proposed system. The investigated technique can also be useful for monitoring other body parameters such as blood pressure, bladder pressure, and physiological signals of the patient in vivo using various implantable sensors.

  9. Wireless energizing system for an automated implantable sensor

    International Nuclear Information System (INIS)

    Swain, Biswaranjan; Nayak, Praveen P.; Kar, Durga P.; Bhuyan, Satyanarayan; Mishra, Laxmi P.

    2016-01-01

    The wireless drive of an automated implantable electronic sensor has been explored for health monitoring applications. The proposed system comprises an automated biomedical sensing system which is energized through resonant inductive coupling. The implantable sensor unit monitors the body temperature and sends the corresponding telemetry data back wirelessly to the data recording unit. It has been observed that the wireless power delivery system is capable of energizing the automated biomedical implantable electronic sensor placed at a distance of 3 cm from the power transmitter, with an energy transfer efficiency of 26% at the operating resonant frequency of 562 kHz. The proposed method ensures real-time monitoring of different human body temperatures around the clock. The monitored temperature data have been compared with a calibrated temperature measurement system to ascertain the accuracy of the proposed system. The investigated technique can also be useful for monitoring other body parameters such as blood pressure, bladder pressure, and physiological signals of the patient in vivo using various implantable sensors.

  10. Wireless energizing system for an automated implantable sensor.

    Science.gov (United States)

    Swain, Biswaranjan; Nayak, Praveen P; Kar, Durga P; Bhuyan, Satyanarayan; Mishra, Laxmi P

    2016-07-01

    The wireless drive of an automated implantable electronic sensor has been explored for health monitoring applications. The proposed system comprises an automated biomedical sensing system which is energized through resonant inductive coupling. The implantable sensor unit monitors the body temperature and sends the corresponding telemetry data back wirelessly to the data recording unit. It has been observed that the wireless power delivery system is capable of energizing the automated biomedical implantable electronic sensor placed at a distance of 3 cm from the power transmitter, with an energy transfer efficiency of 26% at the operating resonant frequency of 562 kHz. The proposed method ensures real-time monitoring of different human body temperatures around the clock. The monitored temperature data have been compared with a calibrated temperature measurement system to ascertain the accuracy of the proposed system. The investigated technique can also be useful for monitoring other body parameters such as blood pressure, bladder pressure, and physiological signals of the patient in vivo using various implantable sensors.
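
The operating point of such a resonant inductive link is set by the LC tank, f0 = 1/(2π√(LC)). A quick check of the standard formula, with a hypothetical coil and capacitor chosen so the result lands near the paper's 562 kHz (these component values are not from the paper):

```python
import math

def resonant_frequency(l_henry, c_farad):
    """Resonant frequency of an LC tank: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# Hypothetical 80 uH coil with a 1 nF tuning capacitor
f0 = resonant_frequency(l_henry=80e-6, c_farad=1e-9)  # about 562.7 kHz
```

Both transmitter and receiver tanks must be tuned to this same frequency; the transfer efficiency then depends on the coils' coupling and quality factors, which is why it drops with the 3 cm separation.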

  11. A novel design of an automatic lighting control system for a wireless sensor network with increased sensor lifetime and reduced sensor numbers.

    Science.gov (United States)

    Mohamaddoust, Reza; Haghighat, Abolfazl Toroghi; Sharif, Mohamad Javad Motahari; Capanni, Niccolo

    2011-01-01

    Wireless sensor networks (WSN) are currently being applied to energy conservation applications such as light control. We propose a design for such a system, called a lighting automatic control system (LACS). The LACS contains a centralized or distributed architecture determined by application requirements and space usage. The system optimizes the calculations and communications for lighting intensity, incorporates user illumination requirements according to their activities, and performs adjustments based on external lighting effects in both external-sensor and external-sensor-less architectures. Methods are proposed for reducing the number of sensors required and increasing the lifetime of those used, for considerably reduced energy consumption. Additionally, we suggest methods for improving the uniformity of the illuminance distribution on a work plane's surface, which improves user satisfaction. Finally, simulation results are presented to verify the effectiveness of our design.
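
A common figure of merit for the illuminance uniformity mentioned above is the ratio of minimum to average illuminance over a grid of points on the work plane. A sketch with invented lux readings (the metric is a standard lighting-design convention, not necessarily the one used in the paper):

```python
def uniformity(illuminances):
    """Uniformity of illuminance over a work plane: E_min / E_avg."""
    e_min = min(illuminances)
    e_avg = sum(illuminances) / len(illuminances)
    return e_min / e_avg

# Illuminance (lux) sampled at 9 grid points on the work plane
grid = [480, 510, 495, 520, 450, 505, 500, 490, 470]
u = uniformity(grid)  # about 0.92; values near 1 mean even lighting
```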

  12. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    2009-02-01

    Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions; these are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view autostereoscopic display as well as other stereo and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.

  13. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Müller Karsten

    2008-01-01

    Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions; these are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view autostereoscopic display as well as other stereo and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.
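
Image-based 3D warping, as used above, unprojects a pixel with its depth into 3D and reprojects it into the virtual camera. The sketch below assumes a rectified setup with a pure horizontal baseline and invented camera intrinsics; real MVD synthesis adds the layering, hole filling and filtering described in the record:

```python
import numpy as np

def warp_pixel(u, v, depth, k, k_virt, t):
    """Warp one pixel into a virtual view: unproject with its depth,
    translate the camera by t, and reproject (no rotation assumed)."""
    p = depth * np.linalg.inv(k) @ np.array([u, v, 1.0])  # 3D point, camera frame
    q = k_virt @ (p - t)                                  # project into virtual view
    return q[:2] / q[2]

# Invented intrinsics: 1000 px focal length, principal point (320, 240)
k = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

# Intermediate view halfway along a 5 cm stereo baseline, point at 2 m depth
u_v, v_v = warp_pixel(320.0, 240.0, depth=2.0, k=k, k_virt=k,
                      t=np.array([0.025, 0.0, 0.0]))  # shifts 12.5 px in u
```

Nearer points shift further than distant ones under the same baseline, which is exactly why disoccluded holes open up along depth discontinuities and need the boundary-layer treatment.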

  14. Patient Posture Monitoring System Based on Flexible Sensors

    Directory of Open Access Journals (Sweden)

    Youngsu Cha

    2017-03-01

    Full Text Available Monitoring patients using vision cameras can cause privacy intrusion problems. In this paper, we propose a patient posture monitoring system based on a patient cloth with unobtrusive sensors. We use flexible sensors based on polyvinylidene fluoride, which is a flexible piezoelectric material. The flexible sensors are inserted into parts close to the knee and hip of the loose patient cloth. We measure electrical signals from the sensors caused by the piezoelectric effect when the knee and hip in the cloth are bent. The measured sensor outputs are transferred to a computer via Bluetooth. We use a custom-made program to detect the posture of the patient through a rule-based algorithm and the sensor outputs. The detectable postures are based on six human motions in and around a bed. The proposed system can detect the patient postures with a success rate over 88 percent for three patients.
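
As a rough illustration of the rule-based detection step, the sketch below maps binary bend states of the knee and hip sensors to coarse postures. The threshold value and the posture rules here are hypothetical; the record does not specify the paper's actual rule set or calibration.

```python
def is_bent(piezo_samples, threshold=0.5):
    """Hypothetical bend detector: a joint counts as bent when the
    rectified piezoelectric output exceeds a threshold."""
    return max(abs(s) for s in piezo_samples) > threshold

def classify_posture(knee_bent, hip_bent):
    """Map binary bend states of the knee/hip sensors to a coarse
    posture label (illustrative rules, not the paper's)."""
    if hip_bent and knee_bent:
        return "sitting"
    if hip_bent and not knee_bent:
        return "sitting with legs extended"
    if not hip_bent and knee_bent:
        return "kneeling"
    return "lying or standing"

knee = is_bent([0.1, 0.9, 0.2])   # strong knee signal -> bent
hip = is_bent([0.0, 0.1])         # weak hip signal -> not bent
print(classify_posture(knee, hip))   # → kneeling
```

A deployed system would add debouncing and per-patient calibration, but the decision structure stays this simple: a few thresholded channels feeding a rule table.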

  15. Sensors management in robotic neurosurgery: the ROBOCAST project.

    Science.gov (United States)

    Vaccarella, Alberto; Comparetti, Mirko Daniele; Enquobahrie, Andinet; Ferrigno, Giancarlo; De Momi, Elena

    2011-01-01

    Robot and computer-aided surgery platforms bring a variety of sensors into the operating room. These sensors generate information to be synchronized and merged to improve the accuracy and safety of the surgical procedure for both patients and operators. In this paper, we present our work on the development of a sensor management architecture that is used to gather and fuse data from localization systems, such as optical and electromagnetic trackers, and from ultrasound imaging devices. The architecture follows a modular client-server approach and was implemented within the EU-funded project ROBOCAST (FP7 ICT 215190). Furthermore, it is based on very well-maintained open-source libraries such as OpenCV and the Image-Guided Surgery Toolkit (IGSTK), which are supported by a worldwide community of developers and allow a significant reduction of software costs. We conducted experiments to evaluate the performance of the sensor manager module. We computed the response time needed for a client to receive tracking data or video images, and the time lag between synchronous acquisitions with an optical tracker and an ultrasound machine. Results showed a median delay of 1.9 ms for a client request of tracking data and about 40 ms for US images; these values are compatible with the data generation rates (20-30 Hz for the tracking system and 25 fps for PAL video). Simultaneous acquisitions were performed with an optical tracking system and a US imaging device: the data were aligned according to the timestamp associated with each sample, and the delay was estimated with a cross-correlation study. A median delay of 230 ms was calculated, showing that real-time 3D reconstruction is not feasible (an offline temporal calibration is needed), although a slow exploration is possible. In conclusion, as far as asleep-patient neurosurgery is concerned, the proposed setup is indeed useful for registration error correction, because brain shift occurs with a time constant of a few tens of minutes.

  16. Video monitoring system for enriched uranium casting furnaces

    International Nuclear Information System (INIS)

    Turner, P.C.

    1978-03-01

    A closed-circuit television (CCTV) system was developed to upgrade the remote-viewing capability on two oralloy (highly enriched uranium) casting furnaces in the Y-12 Plant. A silicon vidicon CCTV camera with a remotely controlled lens and infrared filtering was provided to yield a good-quality video presentation of the furnace crucible as the oralloy material is heated from 25 to 1300 °C. Existing tube-type CCTV monochrome monitors were replaced with solid-state monitors to increase the system reliability.

  17. The Video Collaborative Localization of a Miner’s Lamp Based on Wireless Multimedia Sensor Networks for Underground Coal Mines

    Directory of Open Access Journals (Sweden)

    Kaiming You

    2015-09-01

    Full Text Available Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner’s lamp video collaborative localization algorithm was proposed to locate miners under the insufficient illumination and bifurcated structures of underground tunnels. In the bifurcation area, several camera nodes are deployed along the longitudinal direction of the tunnels, wirelessly forming a collaborative cluster to monitor and locate miners in underground tunnels. Cap-lamps are regarded as the distinguishing feature of miners under the insufficient illumination of underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner’s lamp projects mapping points onto the imaging planes of the collaborative cameras, and the coordinates of the mapping points are calculated by the collaborative cameras. Then, multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are established. To find the three-dimensional (3D) coordinate location of the miner’s lamp, a least-squares method is proposed to obtain the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, and they show that the proposed miner’s lamp video collaborative localization algorithm has good effectiveness, robustness and localization accuracy in real-world underground tunnel conditions.

  18. The Video Collaborative Localization of a Miner’s Lamp Based on Wireless Multimedia Sensor Networks for Underground Coal Mines

    Science.gov (United States)

    You, Kaiming; Yang, Wei; Han, Ruisong

    2015-01-01

    Based on wireless multimedia sensor networks (WMSNs) deployed in an underground coal mine, a miner’s lamp video collaborative localization algorithm was proposed to locate miners under the insufficient illumination and bifurcated structures of underground tunnels. In the bifurcation area, several camera nodes are deployed along the longitudinal direction of the tunnels, wirelessly forming a collaborative cluster to monitor and locate miners in underground tunnels. Cap-lamps are regarded as the distinguishing feature of miners under the insufficient illumination of underground tunnels, which means that miners can be identified by detecting their cap-lamps. A miner’s lamp projects mapping points onto the imaging planes of the collaborative cameras, and the coordinates of the mapping points are calculated by the collaborative cameras. Then, multiple straight lines between the positions of the collaborative cameras and their corresponding mapping points are established. To find the three-dimensional (3D) coordinate location of the miner’s lamp, a least-squares method is proposed to obtain the optimal intersection of the multiple straight lines. Tests were carried out both in a corridor and in a realistic underground tunnel scenario, and they show that the proposed miner’s lamp video collaborative localization algorithm has good effectiveness, robustness and localization accuracy in real-world underground tunnel conditions. PMID:26426023
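
The least-squares intersection of the camera-to-mapping-point lines has a closed form: for lines through points p_i with unit directions d_i, accumulate the projectors (I − d_i d_iᵀ) and solve the resulting 3×3 linear system, which minimizes the sum of squared perpendicular distances. A self-contained NumPy sketch (not the authors' code):

```python
import numpy as np

def nearest_point_to_lines(points, directions):
    """Least-squares 3D point closest to a set of lines.

    Line i passes through points[i] with direction directions[i].
    Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i,
    which minimizes the total squared perpendicular distance.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)          # normalize direction
        M = np.eye(3) - np.outer(d, d)     # projector onto plane normal to d
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Three axis-aligned lines that all pass through (1, 2, 3):
pts = np.array([[0.0, 2, 3], [1, 0, 3], [1, 2, 0]])
dirs = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1]])
print(nearest_point_to_lines(pts, dirs))   # → [1. 2. 3.]
```

With noisy camera measurements the lines no longer meet exactly, and this solution returns the point minimizing the aggregate distance, which is exactly the "optimal intersection" the abstract describes.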

  19. Development of sensor system for indoor location based service implementation

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Joo Heon; Lee, Kyung Ho [Kookmin Univ., Seoul (Korea, Republic of)

    2012-11-15

    This paper introduces an indoor-location-based sensor system for implementing a Building Energy Management System. The system consists of a thermopile sensor and an ultrasonic sensor. The sensor module is rotated through 360° and tilted up and down by two electric motors, so it can simultaneously detect the number and location of the inhabitants in a room. It uses wireless technology to communicate with the building manager or the smart home server, and it can save electric energy by automatically controlling the lighting system or heating/air-conditioning equipment. We also demonstrate the usefulness of the proposed system by applying it to a real environment.

  20. Development of sensor system for indoor location based service implementation

    International Nuclear Information System (INIS)

    Cha, Joo Heon; Lee, Kyung Ho

    2012-01-01

    This paper introduces an indoor-location-based sensor system for implementing a Building Energy Management System. The system consists of a thermopile sensor and an ultrasonic sensor. The sensor module is rotated through 360° and tilted up and down by two electric motors, so it can simultaneously detect the number and location of the inhabitants in a room. It uses wireless technology to communicate with the building manager or the smart home server, and it can save electric energy by automatically controlling the lighting system or heating/air-conditioning equipment. We also demonstrate the usefulness of the proposed system by applying it to a real environment.

  1. Compact, self-contained enhanced-vision system (EVS) sensor simulator

    Science.gov (United States)

    Tiana, Carlo

    2007-04-01

    We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
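
The noise and dead-pixel effects listed in this record can be mimicked in a few lines. The sketch below is a generic illustration, not the SIM-100 implementation; the parameter values are arbitrary, and a faithful simulator would hold the fixed-pattern offsets and dead-pixel map constant across frames rather than redrawing them per call.

```python
import numpy as np

def apply_sensor_artifacts(frame, rng, read_noise=2.0, fpn_sigma=1.5,
                           dead_frac=0.001):
    """Overlay simple imaging-sensor artifacts on one clean rendered
    frame: per-pixel fixed-pattern offsets, temporal read noise, and
    stuck (dead) pixels that read zero. Values are clipped to 8-bit
    range."""
    fpn = rng.normal(0.0, fpn_sigma, frame.shape)     # fixed-pattern noise
    dead = rng.random(frame.shape) < dead_frac        # dead-pixel mask
    noisy = frame + fpn + rng.normal(0.0, read_noise, frame.shape)
    noisy[dead] = 0.0
    return np.clip(noisy, 0.0, 255.0)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)                      # flat gray frame
noisy = apply_sensor_artifacts(clean, rng)
```

Blooming, B-C scope remapping, and atmosphere-dependent contrast loss would layer on top of this in the same post-processing chain, each as a further transform of the rendered frame.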

  2. Video-Based Big Data Analytics in Cyberlearning

    Science.gov (United States)

    Wang, Shuangbao; Kelly, William

    2017-01-01

    In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…

  3. Electron beam diagnostic system using computed tomography and an annular sensor

    Science.gov (United States)

    Elmer, John W.; Teruya, Alan T.

    2014-07-29

    A system for analyzing an electron beam including a circular electron beam diagnostic sensor adapted to receive the electron beam, the circular electron beam diagnostic sensor having a central axis; an annular sensor structure operatively connected to the circular electron beam diagnostic sensor, wherein the sensor structure receives the electron beam; a system for sweeping the electron beam radially outward from the central axis of the circular electron beam diagnostic sensor to the annular sensor structure wherein the electron beam is intercepted by the annular sensor structure; and a device for measuring the electron beam that is intercepted by the annular sensor structure.

  4. A multi-agent system architecture for sensor networks.

    Science.gov (United States)

    Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo

    2009-01-01

    The design of control systems for sensor networks presents important challenges. Besides the traditional problems of how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and large number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring significant development effort. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related to the different aspects to be integrated, mainly sensor management, data processing, communication, and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and its comparison with related work.

  5. Wireless Integrated Microelectronic Vacuum Sensor System

    Science.gov (United States)

    Krug, Eric; Philpot, Brian; Trott, Aaron; Lawrence, Shaun

    2013-01-01

    NASA Stennis Space Center's (SSC's) large rocket engine test facility requires the use of liquid propellants, including the use of cryogenic fluids like liquid hydrogen as fuel, and liquid oxygen as an oxidizer (gases which have been liquefied at very low temperatures). These fluids require special handling, storage, and transfer technology. The biggest problem associated with transferring cryogenic liquids is product loss due to heat transfer. Vacuum jacketed piping is specifically designed to maintain high thermal efficiency so that cryogenic liquids can be transferred with minimal heat transfer. A vacuum jacketed pipe is essentially two pipes in one. There is an inner carrier pipe, in which the cryogenic liquid is actually transferred, and an outer jacket pipe that supports and seals the vacuum insulation, forming the "vacuum jacket." The integrity of the vacuum jacketed transmission lines that transfer the cryogenic fluid from delivery barges to the test stand must be maintained prior to and during engine testing. To monitor the vacuum in these vacuum jacketed transmission lines, vacuum gauge readings are used. At SSC, vacuum gauge measurements are done on a manual rotation basis with two technicians, each using a handheld instrument. Manual collection of vacuum data is labor intensive and uses valuable personnel time. Additionally, there are times when personnel cannot collect the data in a timely fashion (i.e., when a leak is detected, measurements must be taken more often). Additionally, distribution of this data to all interested parties can be cumbersome. To simplify the vacuum-gauge data collection process, automate the data collection, and decrease the labor costs associated with acquiring these measurements, an automated system that monitors the existing gauges was developed by Invocon, Inc. For this project, Invocon developed a Wireless Integrated Microelectronic Vacuum Sensor System (WIMVSS) that provides the ability to gather vacuum

  6. Sense, decide, act, communicate (SDAC): next generation of smart sensor systems

    Science.gov (United States)

    Berry, Nina; Davis, Jesse; Ko, Teresa H.; Kyker, Ron; Pate, Ron; Stark, Doug; Stinnett, Regan; Baker, James; Cushner, Adam; Van Dyke, Colin; Kyckelhahn, Brian

    2004-09-01

    The recent war on terrorism and increased urban warfare have been a major catalyst for increased interest in the development of disposable unattended wireless ground sensors. While the application of these sensors to hostile domains has generally been governed by specific tasks, this research explores a unique paradigm capitalizing on the fundamental functionality of sensor systems. This functionality comprises a sensor's ability to Sense - multi-modal sensing of environmental events, Decide - smart analysis of sensor data, Act - response to environmental events, and Communicate - internally to the system and externally to humans (SDAC). The main concept behind SDAC sensor systems is to integrate the hardware, software, and networking to generate 'knowledge and not just data'. This research explores the use of wireless SDAC units to collectively make up a sensor system capable of persistent, adaptive, and autonomous behavior. These systems are based on the evaluation of scenarios and existing systems covering various domains. This paper presents a promising view of sensor network characteristics, which will eventually yield smart (intelligent collective) network arrays of SDAC sensing units generally applicable to multiple related domains. This paper will also discuss and evaluate the demonstration system developed to test the concepts related to SDAC systems.

  7. A field-deployable, aircraft-mounted sensor for the environmental survey of radionuclides

    International Nuclear Information System (INIS)

    Lepel, E.A.; Geelhood, B.D.; Hensley, W.K.; Quam, W.M.

    1998-01-01

    The Environmental Radionuclide Sensor System (ERSS) is an extremely sensitive sensor, which has been cooperatively developed by Pacific Northwest National Laboratory (PNNL) and the Special Technologies Laboratory (STL) for environmental surveys of radionuclides. The ERSS sensors fit in an airborne pod and include twenty High-Purity Germanium (HPGe) detectors for the high-resolution measurement of gamma-ray-emitting radionuclides, twenty-four ³He detectors for possible neutron measurements, and two video cameras for visual correlation. These aerial HPGe sensors provide much better gamma-ray energy resolution than can be obtained with NaI(Tl) detectors. The associated electronics fit into three racks. The system can be powered by the 28 V DC electrical supply of typical aircraft or by 120 V AC. The data acquisition hardware is controlled by customized software, and a real-time display is provided. Each gamma-ray event is time-stamped and stored for later analysis. This paper will present the physical design, discuss the software used to control the system, and provide some examples of its use. (author)

  8. State Estimation for Sensor Monitoring System with Uncertainty and Disturbance

    Directory of Open Access Journals (Sweden)

    Jianhong Sun

    2014-10-01

    Full Text Available This paper considers the state estimation problem for a sensor monitoring system subject to system uncertainty and nonlinear disturbance. In the sensor monitoring system, the state of each inner sensor node usually contains system uncertainty, and external noise often acts as a nonlinear term. Besides, information transmission in the system is also time consuming. All of the above may cause instability of the monitoring system, in which case the sensor states could be wrongly sampled. Under this circumstance, a proper mathematical model is proposed, and by use of the Lipschitz condition, the nonlinear term is transformed into a linear one. In addition, we suppose that all sensor nodes are arranged in a distributed manner, so that no interference occurs between them. By establishing a proper Lyapunov–Krasovskii functional, sufficient conditions are obtained by solving a linear matrix inequality to make the augmented error system stable, and the observer gains are also derived. Finally, an illustrative example shows that the observed values track the system states well, which fully demonstrates the effectiveness of our result.
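
A toy discrete-time version of such an observer can be simulated directly: a Luenberger-style estimator for a plant with a Lipschitz nonlinearity, standing in for the paper's LMI-derived design. The matrices below are arbitrary but chosen so that A − LC is stable, so the estimation error contracts despite the nonlinearity.

```python
import numpy as np

# Plant: x+ = A x + f(x), measurement y = C x, with f Lipschitz
# (here 0.1*sin, Lipschitz constant 0.1). Observer gain L is a
# hand-picked stand-in for an LMI-designed gain.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.2]])
f = lambda x: 0.1 * np.sin(x)

x = np.array([1.0, -1.0])    # true state
xh = np.zeros(2)             # observer state, deliberately wrong at start
for _ in range(100):
    y = C @ x                                    # sample the output
    x = A @ x + f(x)                             # plant step
    xh = A @ xh + f(xh) + (L @ (y - C @ xh)).ravel()   # observer step
err = np.linalg.norm(x - xh)   # estimation error, small after 100 steps
```

Since the spectral norm of A − LC plus the Lipschitz constant is below one here, the error shrinks geometrically, which is the behavior the paper's sufficient conditions certify for the general uncertain case.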

  9. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    Science.gov (United States)

    2017-10-01

    ARL-TR-8185, October 2017, US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  10. Novel Color Depth Mapping Imaging Sensor System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video camera machine vision system can...

  11. Novel Color Depth Mapping Imaging Sensor System, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video camera machine vision system can...

  12. Performance of a video-image-subtraction-based patient positioning system

    International Nuclear Information System (INIS)

    Milliken, Barrett D.; Rubin, Steven J.; Hamilton, Russell J.; Johnson, L. Scott; Chen, George T.Y.

    1997-01-01

    Purpose: We have developed and tested an interactive video system that utilizes image subtraction techniques to enable high precision patient repositioning using surface features. We report quantitative measurements of system performance characteristics. Methods and Materials: Video images can provide a high precision, low cost measure of patient position. Image subtraction techniques enable one to incorporate detailed information contained in the image of a carefully verified reference position into real-time images. We have developed a system using video cameras providing orthogonal images of the treatment setup. The images are acquired, processed and viewed using an inexpensive frame grabber and a PC. The subtraction images provide the interactive guidance needed to quickly and accurately place a patient in the same position for each treatment session. We describe the design and implementation of our system, and its quantitative performance, using images both to measure changes in position, and to achieve accurate setup reproducibility. Results: Under clinical conditions (60 cm field of view, 3.6 m object distance), the position of static, high contrast objects could be measured with a resolution of 0.04 mm (rms) in each of two dimensions. The two-dimensional position could be reproduced using the real-time image display with a resolution of 0.15 mm (rms). Two-dimensional measurement resolution of the head of a patient undergoing treatment for head and neck cancer was 0.1 mm (rms), using a lateral view, measuring the variation in position of the nose and the ear over the course of a single radiation treatment. Three-dimensional repositioning accuracy of the head of a healthy volunteer using orthogonal camera views was less than 0.7 mm (systematic error) with an rms variation of 1.2 mm. Setup adjustments based on the video images were typically performed within a few minutes. The higher precision achieved using the system to measure objects than to reposition
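
The image-subtraction core of such a repositioning system reduces to a frame difference against a stored reference: the residual shrinks toward zero as the patient approaches the verified position. A minimal sketch on synthetic frames (not the authors' implementation, which also handles camera calibration and interactive display):

```python
import numpy as np

def alignment_residual(reference, live):
    """Subtraction image between a stored reference frame and the live
    view, plus a scalar misalignment score (mean absolute difference)
    that shrinks as the patient approaches the reference position."""
    diff = reference.astype(float) - live.astype(float)
    return diff, float(np.mean(np.abs(diff)))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (120, 160)).astype(float)   # reference setup image
shifted = np.roll(ref, 3, axis=1)                      # patient 3 px off laterally
_, off_score = alignment_residual(ref, shifted)
_, on_score = alignment_residual(ref, ref)             # perfectly repositioned
# on_score is 0; off_score is positive, so the residual guides setup.
```

In practice the difference image itself is what the therapist watches: surface features appear as bright ghosts until the patient is back in the reference pose.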

  13. Testing of a Wireless Sensor System for Instrumented Thermal Protection Systems

    Science.gov (United States)

    Kummer, Allen T.; Weir, Erik D.; Morris, Trey J.; Friedenberger, Corey W.; Singh, Aseem; Capuro, Robert M.; Bilen, Sven G.; Fu, Johnny; Swanson, Gregory T.; Hash, David B.

    2011-01-01

    Funded by NASA's Constellation Universities Institutes Project (CUIP), we have been developing and testing a system to wirelessly power and collect data from sensors on space platforms in general and, in particular, in the harsh environment of spacecraft re-entry. The elimination of wires and associated failures such as chafing, sparking, ageing, and connector issues can increase reliability and design flexibility while reducing costs. These factors present an appealing case for the pursuit of wireless solutions for harsh environments, particularly for their use in space and on spacecraft. We have designed and built a prototype wireless sensor system. The system, with capabilities similar to those of a wired sensor system, was tested in NASA Ames Research Center's Aerodynamic Heating Facility and Interaction Heating Facility. This paper discusses the overall development effort, testing results, as well as future directions.

  14. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image

    International Nuclear Information System (INIS)

    Nova, Joao Luiz Leocadio da; Lopes, Ricardo Tadeu

    1996-01-01

    A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in videofluoroscopic procedures with an online video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from the video images.

  15. A Novel Design of an Automatic Lighting Control System for a Wireless Sensor Network with Increased Sensor Lifetime and Reduced Sensor Numbers

    Science.gov (United States)

    Mohamaddoust, Reza; Haghighat, Abolfazl Toroghi; Sharif, Mohamad Javad Motahari; Capanni, Niccolo

    2011-01-01

    Wireless sensor networks (WSN) are currently being applied to energy conservation applications such as light control. We propose a design for such a system called a Lighting Automatic Control System (LACS). The LACS system contains a centralized or distributed architecture determined by application requirements and space usage. The system optimizes the calculations and communications for lighting intensity, incorporates user illumination requirements according to their activities and performs adjustments based on external lighting effects in external sensor and external sensor-less architectures. Methods are proposed for reducing the number of sensors required and increasing the lifetime of those used, for considerably reduced energy consumption. Additionally we suggest methods for improving uniformity of illuminance distribution on a workplane’s surface, which improves user satisfaction. Finally simulation results are presented to verify the effectiveness of our design. PMID:22164114

  16. A portable wireless power transmission system for video capsule endoscopes.

    Science.gov (United States)

    Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang

    2015-01-01

    Wireless power transmission (WPT) technology can solve the energy shortage problem of the video capsule endoscope (VCE) powered by button batteries, but the fixed platform has limited its clinical application. This paper presents a portable WPT system for VCEs. Besides portability, power transfer efficiency and stability are considered the main indexes in the optimization design of the system, which covers the transmitting coil structure, the portable control box, the operating frequency, and the magnetic core and winding of the receiving coil. Upon the above principles, the relevant parameters are measured, compared and chosen. Finally, through experiments on the platform, the methods are tested and evaluated. In the gastrointestinal tract of a small pig, the VCE is supplied with sufficient energy by the WPT system, and the energy conversion efficiency is 2.8%. The video obtained is clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and directions for further improvement are discussed.

  17. Selected examples of intelligent (micro) sensor systems: state-of-the-art and tendencies

    Science.gov (United States)

    Hauptmann, Peter R.

    2006-03-01

    The capability of intelligent sensors to have more intelligence built into them continues to drive their application in areas including automotive, aerospace and defense, industrial, intelligent house and wear, medical and homeland security. In principle it is difficult to overestimate the importance of intelligent (micro) sensors or sensor systems within advanced societies; one characteristic figure is the global market for sensors, which is now about 20 billion annually. Sensors or sensor systems therefore play a dominant role in many fields, from the macro sensor in the manufacturing industry down to the miniaturized sensor for medical applications. The diversity of sensors precludes a complete description of the state of the art; selected examples will illustrate the current situation. MEMS (microelectromechanical systems) devices are of special interest in the context of micro sensor systems. In the past, the main requirements on a sensor were in terms of metrological performance: the electrical (or optical) signal produced by the sensor needed to match the measurand relatively accurately. Such basic functionality is no longer sufficient. Data processing near the sensor, the extraction of more information than just the direct sensor reading by signal analysis, system aspects and multi-sensor information are the new demands. A shift can be observed away from aiming to design perfect single-function transducers and towards the utilization of system-based sensors as system components. In the ideal case such systems contain sensors, actuators and electronics. They can be realized in monolithic, hybrid or discrete form; which form is used depends on the application. In this article the state of the art of intelligent sensors and sensor systems is reviewed using selected examples. Future trends are deduced.

  18. The application of force-sensing resistor sensors for measuring forces developed by the human hand.

    Science.gov (United States)

    Nikonovas, A; Harrison, A J L; Hoult, S; Sammut, D

    2004-01-01

    Most attempts to measure forces developed by the human hand have been implemented by placing force sensors on the object of interaction. Other researchers have placed sensors just on the subject's fingertips. In this paper, a system is described that measures forces over the entire hand using thin-film sensors and associated electronics. This system was developed by the authors and is able to obtain force readings from up to 60 thin-film sensors at rates of up to 400 samples/s per sensor. The sensors can be placed anywhere on the palm and/or fingers of the hand. The sensor readings, together with a video stream containing information about hand posture, are logged into a portable computer using a multiplexer, analogue-to-digital converter and software developed for the purpose. The system has been successfully used to measure forces involved in a range of everyday tasks such as driving a vehicle, lifting saucepans and hitting a golf ball. In the latter case, results are compared with those from an instrumented golf club. Future applications include the assessment of hand strength following disease, trauma or surgery, and to enable quantitative ergonomic investigations.

  19. New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems

    Science.gov (United States)

    Eckardt, Andreas; Börner, Anko; Lehmann, Frank

    2007-10-01

    The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years' experience with high-resolution imaging technology. Technology changes in the development of detectors, together with significant improvements in manufacturing accuracy and ongoing engineering research, define the next generation of spaceborne sensor systems for Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors and new focal-plane concepts opens the door to new remote-sensing instruments. This class of instruments enables high-resolution sensor systems in terms of geometry and radiometry, and data products such as 3D virtual reality. Systemic approaches are essential for the design of such complex sensor systems for dedicated tasks. Modelling the system theory of the instrument inside a simulated environment is the starting point of the optimization process for the optical, mechanical and electrical designs. Single modules and the entire system have to be calibrated and verified, and suitable procedures must be defined at component, module and system level for the assembly, test and verification process. This kind of development strategy allows hardware-in-the-loop design. The paper gives an overview of the current activities at DLR in the field of innovative sensor systems for photogrammetric and remote sensing purposes.

  20. ENERGY EFFICIENT TRACKING SYSTEM USING WIRELESS SENSORS

    OpenAIRE

    Thankaselvi Kumaresan; Sheryl Mathias; Digja Khanvilkar; Prof. Smita Dange

    2014-01-01

    One of the most important applications of wireless sensor networks (WSNs) is surveillance system, which is used to track moving targets. WSN is composed of a large number of low cost sensors which operate on the power derived from batteries. Energy efficiency is an important issue in WSN, which determines the network lifetime. Due to the need for continuous monitoring with 100% efficiency, keeping all the sensor nodes active permanently leads to fast draining of batteries. Hen...

  1. Integration of video and radiation analysis data

    International Nuclear Information System (INIS)

    Menlove, H.O.; Howell, J.A.; Rodriguez, C.A.; Eccleston, G.W.; Beddingfield, D.; Smith, J.E.; Baumgart, C.W.

    1995-01-01

For the past several years, the integration of containment and surveillance (C/S) with nondestructive assay (NDA) sensors for monitoring the movement of nuclear material has focused on the hardware and communications protocols in the transmission network. Little progress has been made in methods to utilize the combined C/S and NDA data for safeguards and to reduce the inspector time spent in nuclear facilities. One of the fundamental problems in the integration of the combined data is that the two methods operate in different dimensions: the C/S video data is spatial in nature, whereas the NDA sensors provide radiation levels as a function of time. The authors have introduced a new method to integrate spatial (digital video) with temporal (radiation monitoring) information. This technology is based on pattern recognition by neural networks, provides significant capability to analyze complex data, and has the ability to learn and adapt to changing situations. This technique has the potential of significantly reducing the frequency of inspection visits to key facilities without a loss of safeguards effectiveness.

  2. Passive sensor systems for nuclear material monitoring

    International Nuclear Information System (INIS)

    Simpson, M.L.; Boatner, L.A.; Holcomb, D.E.; McElhaney, S.A.; Mihalczo, J.T.; Muhs, J.D.; Roberts, M.R.; Hill, N.W.

    1993-01-01

Passive fiber optic sensor systems capable of confirming the presence of special nuclear materials in storage or process facilities are being developed at Oak Ridge National Laboratory (ORNL). These sensors provide completely passive, remote measurement capability. No power supplies, amplifiers, or other active components that could degrade system reliability are required at the sensor location. ORNL, through its research programs in scintillator materials, has developed a variety of materials for use in alpha-, beta-, gamma-, and neutron-sensitive scintillator detectors. In addition to sensors for measuring radiation flux, new sensor materials have been developed which are capable of measuring weight, temperature, and source location. An example of a passive sensor for temperature measurement is the combination of a thermophosphor (e.g., rare-earth-activated Y₂O₃) with ⁶LiF (95% ⁶Li). This combination results in a new class of scintillators for thermal neutrons that absorb energy from the radiation particles and re-emit the energy as a light pulse, the decay rate of which, over a specified temperature range, is temperature dependent. Other passive sensors being developed include pressure-sensitive triboluminescent materials, weight-sensitive silicone rubber fibers, scintillating fibers, and other materials for gamma and neutron detection. The light from the scintillator materials of each sensor would be sent through optical fibers to a monitoring station, where the attribute quantity could be measured and compared with previously recorded emission levels. Confirmatory measurement applications of these technologies are being evaluated to reduce the effort, costs, and employee exposures associated with inventorying stockpiles of highly enriched uranium at the Oak Ridge Y-12 Plant.

  3. Video control system for a drilling in furniture workpiece

    Science.gov (United States)

    Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.

    2018-05-01

During the last five years, Russian industry has been undergoing robotization, which has given scientific groups new tasks. One of these is machine vision for automatic quality control. Commercial systems of this type cost several thousand dollars each, a price out of reach for regional small businesses. In this article, we describe the principle and algorithm of a low-cost video control system that uses web cameras and a notebook or desktop computer as the computing unit.
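A minimal version of such a webcam-based quality check might threshold the dark drilled hole in a grayscale frame and compare its centroid against the nominal drilling position. This is a hedged sketch of the general idea, not the authors' algorithm; the function names, threshold, and tolerance are all illustrative.

```python
def hole_centroid(gray, thresh=60):
    """Return the (row, col) centroid of pixels darker than `thresh`.
    `gray` is a 2-D list of 0-255 intensities, as might come from a
    web-camera frame after grayscale conversion."""
    pts = [(r, c) for r, row in enumerate(gray)
                  for c, v in enumerate(row) if v < thresh]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def hole_ok(gray, nominal, tol=2.0, thresh=60):
    """Pass/fail check: is the detected hole within `tol` pixels of `nominal`?"""
    c = hole_centroid(gray, thresh)
    if c is None:
        return False
    return ((c[0] - nominal[0]) ** 2 + (c[1] - nominal[1]) ** 2) ** 0.5 <= tol

# Synthetic 8x8 frame: bright workpiece (200) with a dark 2x2 "hole"
# covering rows/cols 3-4.
frame = [[200] * 8 for _ in range(8)]
for r in (3, 4):
    for c in (3, 4):
        frame[r][c] = 20
```

A real system would add lens-distortion correction and a pixel-to-millimeter scale, but the pass/fail logic reduces to this comparison.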

  4. Systems and Sensors for Debris-flow Monitoring and Warning

    Directory of Open Access Journals (Sweden)

    Lorenzo Marchi

    2008-04-01

Full Text Available Debris flows are a type of mass movement that occurs in mountain torrents. They consist of a high concentration of solid material in water that flows as a wave with a steep front. Debris flows can be considered a phenomenon intermediate between landslides and water floods. They are amongst the most hazardous natural processes in mountainous regions and may occur under different climatic conditions. Their destructiveness is due to different factors: their capability of transporting and depositing huge amounts of solid materials, which may also reach large sizes (boulders of several cubic meters are commonly transported by debris flows), their steep fronts, which may reach several meters of height, and also their high velocities. The implementation of both structural and non-structural control measures is often required when debris flows endanger routes, urban areas and other infrastructures. Sensor networks for debris-flow monitoring and warning play an important role amongst non-structural measures intended to reduce debris-flow risk. In particular, debris-flow warning systems can be subdivided into two main classes: advance warning and event warning systems. These two classes employ different types of sensors. Advance warning systems are based on monitoring causative hydrometeorological processes (typically rainfall) and aim to issue a warning before a possible debris flow is triggered. Event warning systems are based on detecting debris flows when these processes are in progress. They have a much smaller lead time than advance warning ones but are also less prone to false alarms. Advance warning for debris flows employs sensors and techniques typical of meteorology and hydrology, including measuring rainfall by means of rain gauges and weather radar and monitoring water discharge in headwater streams. Event warning systems use different types of sensors, encompassing ultrasonic or radar gauges, ground vibration sensors, videocameras, avalanche

  5. The everyday lives of video game developers: Experimentally understanding underlying systems/structures

    Directory of Open Access Journals (Sweden)

    Casey O'Donnell

    2009-03-01

    Full Text Available This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.

  6. A Miniaturized Video System for Monitoring Drosophila Behavior

    Science.gov (United States)

    Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana

    2011-01-01

    populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
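The analog band-pass stage and event extraction described above can be approximated in software. The sketch below is an illustrative digital analogue of the 0.3-10 Hz circuit, not the flight hardware: a slow moving mean removes the DC illumination component (high-pass) and a short moving mean suppresses noise (low-pass), with events taken as upward threshold crossings; the window sizes and threshold are invented.

```python
def moving_mean(x, w):
    """Boxcar average used here as a crude low-pass stage."""
    return [sum(x[max(0, i - w + 1): i + 1]) / len(x[max(0, i - w + 1): i + 1])
            for i in range(len(x))]

def activity_events(luminance, slow_w=50, fast_w=3, thresh=0.5):
    """Software stand-in for the analog band-pass circuit.

    Subtracting a slow moving mean removes the steady illumination level;
    a short moving mean smooths pixel noise. Events are upward crossings
    of `thresh`, corresponding to flies entering or leaving the image."""
    slow = moving_mean(luminance, slow_w)
    band = moving_mean([v - s for v, s in zip(luminance, slow)], fast_w)
    return [i for i in range(1, len(band))
            if band[i - 1] <= thresh < band[i]]
```

Inter-event durations and event counts per unit time can then be derived directly from the returned indices, matching the activity parameters described in the abstract.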

  7. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  8. Active Multimodal Sensor System for Target Recognition and Tracking.

    Science.gov (United States)

    Qu, Yufu; Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-06-28

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system.

  9. Integrated tunneling sensor for nanoelectromechanical systems

    DEFF Research Database (Denmark)

    Sadewasser, S.; Abadal, G.; Barniol, N.

    2006-01-01

Transducers based on quantum mechanical tunneling provide an extremely sensitive sensor principle, especially for nanoelectromechanical systems. For proper operation a gap between the electrodes of below 1 nm is essential, requiring the use of structures with a mobile electrode. At such small distances, attractive van der Waals and capillary forces become sizable, possibly resulting in snap-in of the electrodes. The authors present a comprehensive analysis and evaluation of the interplay between the involved forces and identify requirements for the design of tunneling sensors. Based on this analysis, a tunneling sensor is fabricated by Si micromachining technology and its proper operation is demonstrated. (c) 2006 American Institute of Physics.

  10. Satellite markers: a simple method for ground truth car pose on stereo video

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

Predicting the future location of other cars is essential for advanced safety systems, and remote estimation of car pose, particularly heading angle, is key to this prediction. Stereo vision systems provide the 3D information of a scene. Ground truth in this specific context is referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data-fusion task that usually combines different kinds of sensors. The novelty of this paper is a method to generate ground-truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a moving stereo vision system subjected to undesired vibrations and/or leaning. We developed a video post-processing technique that employs a common camera calibration tool for 3D ground-truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation of each camera at frame level.

  11. A Multi-Agent System Architecture for Sensor Networks

    Directory of Open Access Journals (Sweden)

    María Guijarro

    2009-12-01

    Full Text Available The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work.

  12. Digitized video subject positioning and surveillance system for PET

    International Nuclear Information System (INIS)

    Picard, Y.; Thompson, C.J.

    1995-01-01

Head motion contributes significantly to the degradation of image quality in Positron Emission Tomography (PET) studies. Images from different studies must also be realigned digitally to be correlated when the subject position has changed. These constraints could be eliminated if the subject's head position could be monitored accurately. The authors have developed a video camera-based surveillance system to monitor the head position and motion of subjects undergoing PET studies. The system consists of two CCD (charge-coupled device) cameras placed orthogonally such that both face and profile views of the subject's head are displayed side by side on an RGB video monitor. Digitized images overlay the live images in contrasting colors on the monitor. Such a system can be used to (1) position the subject in the field of view (FOV) by displaying the position of the scanner's slices on the monitor along with the current subject position, (2) monitor head motion and alert the operator of any motion during the study and (3) reposition the subject accurately for subsequent studies by displaying the previous position along with the current position in a contrasting color.
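In its simplest form, the motion-alert function of such a system reduces to comparing the current head silhouette against the stored reference overlay. The sketch below is illustrative only (binary masks instead of RGB video, invented names and tolerance):

```python
def centroid(mask):
    """Centroid of a binary head silhouette (2-D list of 0/1)."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def motion_alert(reference, current, tol=1.5):
    """Flag head motion when the silhouette centroid has moved more than
    `tol` pixels. `reference` is the digitized overlay stored at setup
    time; `current` comes from the live camera feed."""
    (r0, c0), (r1, c1) = centroid(reference), centroid(current)
    return ((r1 - r0) ** 2 + (c1 - c0) ** 2) ** 0.5 > tol
```

Running the same comparison on both orthogonal camera views gives motion sensitivity in all three axes.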

  13. A mobile mapping system for hazardous facilities

    International Nuclear Information System (INIS)

    Barry, R.E.; Jones, J.P.; Little, C.Q.; Wilson, C.W.

    1997-01-01

The Mobile Mapping System (MMS) is a completely self-contained vehicle with omnidirectional capability and extremely good odometry, capable of operating up to 12 hours between battery charges. The platform itself is based on a dual differential drive system with a compliant linkage between the two drive systems. This compliant linkage allows low-level controller errors to be absorbed by the system and their navigational effects to be compensated for, yielding an extremely accurate navigational capability. The vehicle design also allows for a considerable payload (250 lb) and a large surface area for auxiliary equipment mounting (2 by 6 ft). The vehicle supports remote operation by reading commands and writing replies through its serial communications port. Use of a radio-ethernet and a radio-video channel allows remote video and communications links to be maintained with the vehicle in many remote operation environments. The MMS uses a structured light system to quickly acquire coarse range images of the environment and a coherent laser radar (CLR) to acquire finer resolution range images. The coherent laser radar can also be used to determine platform position and orientation to millimeter accuracies if targets of known position are used. Sensor range image data as well as video are off-loaded to a remote computer for postprocessing, display, and archiving. The diagrams and images below include an image of the MMS vehicle before the addition of sensors, a diagram of the vehicle with sensors, and the computer system connections.

  14. Image and video based remote target localization and tracking on smartphones

    Science.gov (United States)

    Wang, Qia; Lobzhanidze, Alex; Jang, Hyun; Zeng, Wenjun; Shang, Yi; Yang, Jingyu

    2012-06-01

Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, because of their powerful sensing and computing capabilities. In this paper, we describe a novel and accurate image and video based remote target localization and tracking system using Android smartphones, leveraging their built-in sensors such as the camera, digital compass, GPS, etc. Even though many other distance estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, the smartphones' user-friendly interface has been effectively taken advantage of by our system to facilitate low complexity and high accuracy. Our experimental results show that our system works accurately and efficiently.
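The core geometric step of such a system — projecting a target position from the phone's GPS fix, compass bearing, and an image-derived distance estimate — can be written with the standard spherical "destination point" formula. This is a generic sketch, not the paper's implementation; the function name and the use of a spherical Earth model are assumptions.

```python
import math

def project_target(lat_deg, lon_deg, bearing_deg, dist_m, R=6371000.0):
    """Estimate target latitude/longitude from an observer's GPS fix,
    compass bearing (degrees clockwise from north), and distance in
    meters, using the spherical forward (destination-point) formula."""
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    brg, ang = math.radians(bearing_deg), dist_m / R  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(ang) +
                     math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# 1000 m due north of the equator is ~0.009 degrees of latitude.
lat, lon = project_target(0.0, 0.0, 0.0, 1000.0)
```

The accuracy of the projected fix is then dominated by the compass and distance-estimation errors rather than by this projection step.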

  15. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    Full Text Available A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication and a network is easily overflown with a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  16. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication and a network is easily overflown with a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
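The mapping from reaction-diffusion state to coding rate described in the two records above can be illustrated with a deliberately simplified model: a linear decay-plus-diffusion update on a ring of camera nodes (a stand-in for the nonlinear reaction-diffusion dynamics such mechanisms typically adopt). All parameter values and names below are invented for illustration.

```python
def update(u, stim, decay=1.0, D=0.5, dt=0.1):
    """One Euler step of a simplified reaction-diffusion update on a ring
    of camera nodes: activator u decays, diffuses to neighbours, and is
    driven by target-detection stimulus."""
    n = len(u)
    return [u[i] + dt * (-decay * u[i] + stim[i]
                         + D * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]))
            for i in range(n)]

def coding_rates(n=10, target=3, steps=300, r_min=64.0, r_max=1024.0):
    """Relax the lattice to steady state, then map activator levels
    linearly onto a [r_min, r_max] kbit/s coding-rate range: nodes
    nearest the detected target spend the most bits."""
    u = [0.0] * n
    stim = [1.0 if i == target else 0.0 for i in range(n)]
    for _ in range(steps):
        u = update(u, stim)
    lo, hi = min(u), max(u)
    return [r_min + (r_max - r_min) * (x - lo) / (hi - lo) for x in u]

rates = coding_rates()
```

Nodes nearest the detected target end up with the highest coding rate, reproducing the spatial bandwidth allocation the mechanism aims for, while the diffusion term keeps neighbouring rates smoothly graded rather than switching abruptly.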

  17. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computers. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
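Additive spread-spectrum watermarking is one classic way to meet the robustness requirement sketched above: a keyed pseudo-noise pattern is added to the pixel values at low amplitude and later detected by correlation. The example below is generic, not the scheme proposed in the paper, and operates on a flattened frame for brevity.

```python
import random

def pn_sequence(length, key):
    """Pseudo-noise ±1 sequence derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(length)]

def embed(frame, key, strength=2):
    """Additive spread-spectrum embedding: the keyed PN sequence is added
    at low amplitude, nearly imperceptible to a viewer."""
    pn = pn_sequence(len(frame), key)
    return [p + strength * w for p, w in zip(frame, pn)]

def detect(frame, key):
    """Correlation detector: a high normalized correlation with the keyed
    PN sequence indicates the watermark is present."""
    pn = pn_sequence(len(frame), key)
    mean = sum(frame) / len(frame)
    return sum((p - mean) * w for p, w in zip(frame, pn)) / len(frame)

# Flat test "frame": correlation is ~strength with the right key,
# near zero with a wrong key or an unmarked frame.
frame = [128] * 1000
marked = embed(frame, key=42)
```

Because the watermark energy is spread over many pixels, moderate distortions of individual pixels degrade the correlation only gradually, which is the source of the robustness discussed above.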

  18. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that, after applying inVideo to the course video material, student-student and student-faculty interactions increased significantly across the program's 24 sections.

  19. Bayesian based design of real-time sensor systems for high-risk indoor contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Sreedharan, Priya [Univ. of California, Berkeley, CA (United States)

    2007-01-01

The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. To respond effectively, the contaminant release must be quickly detected and characterized to determine unobserved parameters, such as release location and strength. Characterizing the release requires solving an inverse problem. Designing a robust real-time sensor system that solves the inverse problem is challenging because the fate and transport of contaminants is complex, sensor information is limited and imperfect, and real-time estimation is computationally constrained. This dissertation uses a system-level approach, based on a Bayes Monte Carlo framework, to develop sensor-system design concepts and methods. I describe three investigations that explore complex relationships among sensors, network architecture, interpretation algorithms, and system performance. The investigations use data obtained from tracer gas experiments conducted in a real building. The influence of individual sensor characteristics on the sensor-system performance for binary-type contaminant sensors is analyzed. Performance tradeoffs among sensor accuracy, threshold level and response time are identified; these attributes could not be inferred without a system-level analysis. For example, more accurate but slower sensors are found to outperform less accurate but faster sensors. Secondly, I investigate how the sensor-system performance can be understood in terms of contaminant transport processes and the model representation that is used to solve the inverse problem. The determination of release location and mass are shown to be related to and constrained by transport and mixing time scales. These time scales explain performance differences among different sensor networks. For example, the effect of longer sensor response times is comparably less for releases with longer mixing time scales. The third investigation explores how information fusion from heterogeneous sensors may improve the sensor-system
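The Bayes Monte Carlo framework mentioned above can be illustrated on a toy 1-D release-characterization problem: sample candidate (location, mass) pairs from a prior, weight each sample by the likelihood of the observed sensor readings, and report the posterior mean. The dispersion kernel, priors, and all numbers below are invented stand-ins for the dissertation's building transport model.

```python
import math
import random

def predicted(sensor_x, release_x, mass):
    """Toy steady-state dispersion kernel along a 1-D corridor -- an
    invented stand-in for a real multizone transport model."""
    return mass * math.exp(-abs(sensor_x - release_x))

def bayes_mc(readings, sensor_xs, n=20000, sigma=0.05, seed=1):
    """Bayes Monte Carlo: draw (location, mass) samples from a uniform
    prior, weight each by the Gaussian likelihood of the observed
    readings, and return the posterior-mean estimate of both unobserved
    parameters."""
    rng = random.Random(seed)
    wsum = wx = wm = 0.0
    for _ in range(n):
        x0, m = rng.uniform(0.0, 10.0), rng.uniform(0.0, 2.0)
        w = 1.0
        for sx, obs in zip(sensor_xs, readings):
            w *= math.exp(-0.5 * ((obs - predicted(sx, x0, m)) / sigma) ** 2)
        wsum += w
        wx += w * x0
        wm += w * m
    return wx / wsum, wm / wsum

# Synthetic truth: release at x = 4.0 with unit mass, seen by four sensors.
sensors = [1.0, 3.0, 6.0, 9.0]
obs = [predicted(sx, 4.0, 1.0) for sx in sensors]
x_hat, m_hat = bayes_mc(obs, sensors)
```

Because the same weighted-sample machinery accommodates any sensor model, the framework lends itself to exactly the kind of design-space exploration (sensor accuracy, threshold, response time) the dissertation performs.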

  20. Short-term change detection for UAV video

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

In recent years, there has been an increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken between several days, weeks, or even years. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a pre-requisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine-registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer
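Image differencing extended by a local neighborhood search can be sketched as follows: instead of a per-pixel difference, each pixel takes the minimum absolute difference over a small search window, so residual misregistration (e.g., a one-pixel shift) is not reported as change while a genuinely new object still is. This is an illustrative reconstruction of the idea, not the ABUL implementation.

```python
def local_diff(img_a, img_b, search=1):
    """Extended image differencing: for each pixel of img_a, take the
    minimum absolute difference against img_b within a
    (2*search+1)^2 neighbourhood, absorbing small misalignments."""
    h, w = len(img_a), len(img_a[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            best = None
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        d = abs(img_a[r][c] - img_b[rr][cc])
                        best = d if best is None else min(best, d)
            out[r][c] = best
    return out
```

Running the function in both directions (a vs. b and b vs. a) separates objects that merely shifted from objects that appeared or disappeared.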

  1. Online Sensor Calibration Assessment in Nuclear Power Systems

    International Nuclear Information System (INIS)

    Coble, Jamie B.; Ramuhalli, Pradeep; Meyer, Ryan M.; Hashemian, Hash

    2013-01-01

Safe, efficient, and economic operation of nuclear systems (nuclear power plants, fuel fabrication and storage, used fuel processing, etc.) relies on transmission of accurate and reliable measurements. During operation, sensors degrade due to age, environmental exposure, and maintenance interventions. Sensor degradation can affect the measured and transmitted signals, manifesting as sensor failure, signal drift, increased sensor response time, etc. Currently, periodic sensor recalibration is performed to avoid these problems. Sensor recalibration activities include both calibration assessment and adjustment (if necessary). In nuclear power plants, periodic recalibration of safety-related sensors is required by the plant technical specifications. Recalibration typically occurs during refueling outages (about every 18 to 24 months). Non-safety-related sensors also undergo recalibration, though not as frequently. However, this approach to maintaining sensor calibration and performance is time-consuming and expensive, leading to unnecessary maintenance, increased radiation exposure to maintenance personnel, and potential damage to sensors. Online monitoring (OLM) of sensor performance is a non-invasive approach to assessing instrument calibration. OLM can mitigate many of the limitations of the current periodic recalibration practice by providing more frequent assessment of calibration and identifying those sensors that are operating outside of calibration tolerance limits without removing sensors or interrupting operation. This can support extended operating intervals for unfaulted sensors and target recalibration efforts to only degraded sensors.
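A minimal illustration of OLM-style calibration assessment with redundant channels: estimate the process value for each sensor from the other channels and flag any channel whose mean residual exceeds the calibration tolerance. This is a generic sketch, not the authors' method; real OLM systems use more sophisticated empirical models and uncertainty bounds.

```python
def drift_check(history, tolerance):
    """Cross-calibration check over redundant channels: each sensor's
    reading is compared with the average of the other channels, and a
    channel is flagged when its mean residual exceeds the calibration
    tolerance. `history` holds one list of simultaneous readings per
    time step."""
    n = len(history[0])
    resid = [0.0] * n
    for row in history:
        for i, v in enumerate(row):
            others = [x for j, x in enumerate(row) if j != i]
            resid[i] += v - sum(others) / len(others)
    resid = [r / len(history) for r in resid]
    return [abs(r) > tolerance for r in resid], resid

# Three redundant channels around a ~500-unit process value; channel 2
# carries a +0.6 calibration drift.
history = [[500.0, 500.0, 500.6],
           [500.2, 500.2, 500.8],
           [499.8, 499.8, 500.4]]
flags, resid = drift_check(history, tolerance=0.5)
```

Only the drifted channel is flagged for recalibration, which is the targeting behaviour the abstract describes.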

  2. Micro optical sensor systems for sunsensing applications

    Science.gov (United States)

    Leijtens, Johan; de Boom, Kees

    2017-11-01

Optimum application of micro-system technologies allows building small sensor systems that will alter procurement strategies for spacecraft manufacturers. One example is the decreased size and cost of state-of-the-art sunsensors. Integrated sensor systems are being designed which, through the use of microsystem technology, are an order of magnitude smaller than most current sunsensors and which, owing to the high reproducibility of batch manufacturing, hold the promise of drastic price reductions. If the Commercial Off The Shelf (COTS) approach is adopted by satellite manufacturers, it will drastically decrease the mass and cost budgets associated with sunsensing applications.

  3. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Science.gov (United States)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveal that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems `expose' relevant video-generated metadata events, such as triggered alerts and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises of a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).

  4. Hybrid Exploration Agent Platform and Sensor Web System

    Science.gov (United States)

    Stoffel, A. William; VanSteenberg, Michael E.

    2004-01-01

    A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. The Hybrid Exploration Agent Platform and Sensor Web (HEAP-SW), like the ANTS concept, is a Sensor Web concept, but conceptually and practically a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low-cost, high-return solution to a complex problem. All of the technology for initial development and implementation is currently available. The HEAP Sensor Web (HEAP-SW) consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW), and the immobile Data collection and Uplink units (DU). The HEAP-SW as a whole refers to any group of mobile agents or robots where each robot is a mobile data collection unit that spends most of its time acting in concert with all other robots, the DUs in the web, and the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.

  5. Smart sensors and systems innovations for medical, environmental, and IoT applications

    CERN Document Server

    Yasuura, Hiroto; Liu, Yongpan; Lin, Youn-Long

    2017-01-01

    This book describes the technology used for effective sensing of our physical world and intelligent processing techniques for sensed information, which are essential to the success of the Internet of Things (IoT). The authors provide a multidisciplinary view of sensor technology from the materials, process, circuit, and big-data domains and showcase smart sensor systems in real applications including smart home, transportation, medical, environmental, agricultural, etc. Unlike earlier books on sensors, this book provides a “global” view on smart sensors, covering abstraction levels from device and circuit to systems and algorithms. Profiles active research on smart sensors based on CMOS microelectronics; Describes applications of sensors and sensor systems in cyber-physical systems, the social information infrastructure in our modern world; Includes coverage of a variety of related information technologies supporting the application of sensors; Discusses the integration of computation, networking, actuation, database...

  6. Validation of an Inertial Sensor System for Swing Analysis in Golf

    Directory of Open Access Journals (Sweden)

    Paul Lückemann

    2018-02-01

    Wearable inertial sensor systems are an emerging tool for self-evaluation in sports and can be used for swing analysis in golf. The aim of this work was to determine the validity and repeatability of an inertial sensor system attached to a player’s glove, using a radar system as a reference. Twenty subjects performed five full swings with each of three different clubs (wood, 7-iron, wedge). Clubhead speed was measured simultaneously by both sensor systems. Limits of Agreement were used to determine the accuracy and precision of the inertial sensor system. Results show that the inertial sensor system is accurate but lacks precision: the random error was quantified as approximately 17 km/h. The measurement error depended on the club type and was weakly negatively correlated with the magnitude of clubhead speed.
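
    The Limits of Agreement analysis the study relies on can be sketched in a few lines: the bias of the differences gives accuracy, and 1.96 standard deviations around it give the 95% agreement band. The clubhead-speed values below are hypothetical, not the study's data.

```python
import numpy as np

def limits_of_agreement(ref, test):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    diff = test - ref                 # per-swing measurement error
    bias = diff.mean()                # systematic error (accuracy)
    sd = diff.std(ddof=1)             # spread of errors (precision)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical clubhead speeds in km/h: radar reference vs. wearable sensor
radar  = [150.0, 152.5, 148.0, 155.0, 151.0]
sensor = [151.0, 150.0, 149.5, 156.5, 149.0]
bias, lo, hi = limits_of_agreement(radar, sensor)
```

    A wide band between `lo` and `hi` with a small `bias` is exactly the "accurate but imprecise" pattern the abstract reports.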

  7. Operation of remote mobile sensors for security of drinking water distribution systems.

    Science.gov (United States)

    Perelman, Lina; Ostfeld, Avi

    2013-09-01

    The deployment of fixed online water quality sensors in water distribution systems has been recognized as one of the key components of contamination warning systems for securing public health. This study explores how the inclusion of mobile sensors for inline monitoring of various water quality parameters (e.g., residual chlorine, pH) can enhance water distribution system security. Mobile sensors equipped with sampling, sensing, data acquisition, wireless transmission and power generation systems are being designed, fabricated, and tested, and prototypes are expected to be released in the very near future. This study initiates the development of a theoretical framework for modeling mobile sensor movement in water distribution systems and integrating the sensory data collected from stationary and non-stationary sensor nodes to increase system security. The methodology is applied and demonstrated on two benchmark networks. The performance of different sensor network designs is compared for fixed and for combined fixed and mobile sensor networks. Results indicate that complementing online sensor networks with inline monitoring can increase detection likelihood and decrease mean time to detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
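
    The two performance measures used to compare designs, detection likelihood and mean time to detection, can be defined concretely. This is a generic sketch, not the study's evaluation code; the event times below are made up for illustration.

```python
def detection_metrics(detection_times, n_events):
    """Detection likelihood and mean time to detection (MTTD) over a set of
    simulated contamination events; None marks an event no sensor detected."""
    detected = [t for t in detection_times if t is not None]
    likelihood = len(detected) / n_events
    mttd = sum(detected) / len(detected) if detected else float('inf')
    return likelihood, mttd

# Hypothetical detection times (hours) for 5 simulated events, one missed
likelihood, mttd = detection_metrics([2.0, 4.5, None, 1.5, 3.0], n_events=5)
```

    Adding mobile sensors would show up in these metrics as a higher likelihood and a lower MTTD across the simulated events.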

  8. Wearable PPG sensor based alertness scoring system.

    Science.gov (United States)

    Dey, Jishnu; Bhowmik, Tanmoy; Sahoo, Saswata; Tiwari, Vijay Narayan

    2017-07-01

    Quantifying mental alertness in today's world is important, as it enables a person to adopt lifestyle changes for better work efficiency. Miniaturized sensors in wearable devices have facilitated the detection and monitoring of mental alertness. Photoplethysmography (PPG) sensors offer one such opportunity through Heart Rate Variability (HRV) by providing information about one's daily alertness levels without requiring any manual interference from the user. In this paper, a smartwatch-based alertness estimation system is proposed. Data collected from the PPG sensor of a smartwatch is processed and fed to a machine-learning-based model to obtain a continuous alertness score. Utility functions are designed based on statistical analysis to give a quality score for different stages of alertness such as awake, long sleep, and short power naps. An intelligent data collection approach is proposed in collaboration with the motion sensor in the smartwatch to reduce battery drainage. Overall, our proposed wearable-based system provides a detailed analysis of alertness over time in a systematic and optimized manner. We were able to achieve an accuracy of 80.1% for sleep/awake classification along with the alertness score. This opens up the possibility of quantifying alertness levels using a single PPG sensor for better management of health-related activities, including sleep.
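
    The HRV signal such a PPG pipeline relies on is typically summarized by measures like RMSSD over inter-beat intervals. The paper does not specify its features, so this is only a generic sketch with made-up intervals.

```python
import math

def rmssd(ibi_ms):
    """RMSSD: root mean square of successive differences of inter-beat
    intervals (ms), a standard short-term heart-rate-variability measure."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical inter-beat intervals (ms) derived from PPG peak detection
hrv = rmssd([800, 810, 790, 805, 795])
```

    Features of this kind, computed over sliding windows, are what a classifier would consume to separate sleep from wakefulness.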

  9. Full-motion video analysis for improved gender classification

    Science.gov (United States)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video and motion-capture range data provide a dataset of higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on the larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
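
    The leave-one-out evaluation protocol used here generalizes to any classifier: train on all trials but one, test on the held-out trial, and repeat. The sketch below uses a simple nearest-neighbor stand-in and toy 2-D data; the paper's actual classifiers were LDA and a nonlinear SVM.

```python
import numpy as np

def loocv_accuracy(X, y, classify):
    """Leave-one-out cross-validation: train on n-1 samples, test on the rest."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i          # hold out sample i
        pred = classify(X[mask], y[mask], X[i])
        correct += int(pred == y[i])
    return correct / n

def nearest_neighbor(X_train, y_train, x):
    # stand-in classifier; the paper used LDA and a nonlinear SVM
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

# Toy 2-D data: two well-separated clusters standing in for gait features
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])
y = np.array([0, 0, 0, 1, 1, 1])
acc = loocv_accuracy(X, y, nearest_neighbor)   # → 1.0 on this toy data
```

    With 98 trials this yields 98 train/test splits, which is how the 73% and 88% rates in the abstract were obtained.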

  10. Video-based respiration monitoring with automatic region of interest detection

    NARCIS (Netherlands)

    Janssen, R.J.M.; Wang, Wenjin; Moço, A.; de Haan, G.

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration

  11. Wireless Sensor Network Based Smart Parking System

    Directory of Open Access Journals (Sweden)

    Jeffrey JOSEPH

    2014-01-01

    Ambient Intelligence is a vision in which various devices come together and process information from multiple sources in order to exert control on the physical environment. In addition to computation and control, communication plays a crucial role in the overall functionality of such a system. Wireless Sensor Networks are one such class of networks which meet these criteria. These networks consist of spatially distributed sensor motes which work in a co-operative manner to sense and control the environment. In this work, an implementation of an energy-efficient and cost-effective, wireless-sensor-network-based vehicle parking system for a multi-floor indoor parking facility has been introduced. The system monitors the availability of free parking slots and guides the vehicle to the nearest free slot. The amount of time the vehicle has been parked is monitored for billing purposes. The status of the motes (dead/alive) is also recorded. Information like the slot allocated, directions to the slot, and billing data is sent as a message to the customer’s mobile phone. This paper extends our previous work [1] with the development of a low-cost sensor mote, about one tenth the cost of a commercially available mote, keeping in mind the price-sensitive markets of developing countries.

  12. Video profile monitor diagnostic system for GTA

    International Nuclear Information System (INIS)

    Sandoval, D.P.; Garcia, R.C.; Gilpatrick, J.D.; Johnson, K.F.; Shinas, M.A.; Wright, R.; Yuan, V.; Zander, M.E.

    1992-01-01

    This paper describes a video diagnostic system used to measure the beam profile and position of the Ground Test Accelerator 2.5-MeV H⁻ ion beam as it exits the intermediate matching section. Inelastic collisions between the H⁻ ions and residual nitrogen gas cause the nitrogen to fluoresce. The resulting light is captured through transport optics by an intensified CCD camera and is digitized. Real-time beam-profile images are displayed and stored for detailed analysis. Analyzed data showing the resolutions for both position and profile measurements will also be presented.

  13. Development of basic system for sensor calibration support in nuclear power plants

    International Nuclear Information System (INIS)

    Kusumi, Naohiro; Ohga, Yukiharu; Fukuda, Mitsuko; Ishizaki, Yuuichi; Koyama, Mikio; Maeda, Akihiko

    2004-01-01

    It is strongly desirable to reduce maintenance costs and shorten the time of periodic inspections in nuclear power plants; it is therefore important to reduce the amount of maintenance work during the inspection. In Japan, sensor calibration is usually performed at every periodic inspection, and it requires a large amount of work. A system for sensor calibration support has been developed to reduce this work. The system is composed of two subsystems, a statistical analysis subsystem and a drift detection subsystem, as well as a human-machine interface, which offers support information. The statistical analysis subsystem supports the decision on sensor calibration intervals based on statistical analysis of sensor calibration data. Sensor drift may, however, exceed the allowance value before the end of the calibration interval determined by the statistical analysis subsystem, for example because of malfunctions. To cope with this, the drift detection subsystem detects sensor drift online during plant operation. By combining the statistical analysis subsystem and the drift detection subsystem, a reliable sensor calibration support system is realized. The basic system composed of the two subsystems was developed and evaluated using real plant data. The results showed that the sensor calibration intervals can be extended beyond current intervals and that the system is capable of detecting sensor drift online. (author)
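
    The essence of an online drift check of this kind can be illustrated minimally: compare each reading against a reference estimate (e.g. from redundant sensors or a plant model, neither specified in the abstract) and flag deviations beyond the allowance value. This is only an assumed, simplified scheme, not the paper's algorithm.

```python
def drift_alarm(readings, reference, allowance):
    """Return indices where the sensor deviates from the reference estimate
    by more than the allowance value (a simple online drift check)."""
    return [i for i, (r, ref) in enumerate(zip(readings, reference))
            if abs(r - ref) > allowance]

# Hypothetical pressure readings drifting away from a stable reference
alarms = drift_alarm([10.0, 10.1, 10.6, 11.2],
                     [10.0, 10.0, 10.0, 10.0],
                     allowance=0.5)   # → [2, 3]
```
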

  14. A HOME-BASED MASSED PRACTICE SYSTEM FOR PEDIATRIC NEUROREHABILITATION

    Directory of Open Access Journals (Sweden)

    Yi-Ning Wu

    2013-11-01

    The objective of this paper is to introduce a novel low-cost human-computer interface (HCI) system for home-based massed practice for children with upper limb impairment due to brain injury. Successful massed practice, a type of neurorehabilitation, may be of value for children with brain injury because it facilitates use of the impaired limb. Automated, home-based systems could provide a practical means for massed practice; however, the optimal strategy to deliver and monitor home-based massed practice is still unclear. We integrated motion sensor, video game, and HCI software technologies to create a home-based massed-practice system targeting specific joints. The system records joint angle and the number of movements using a low-cost custom hand-held sensor. The sensor acts as an input device to play video games. We demonstrated the system’s functionality and provide preliminary observations on its use by children with brain injury, including joint motion and muscle activation.

  15. The tsunami service bus, an integration platform for heterogeneous sensor systems

    Science.gov (United States)

    Haener, R.; Waechter, J.; Kriegel, U.; Fleischer, J.; Mueller, S.

    2009-04-01

    1. INTRODUCTION Early warning systems are long-lived and evolving: new sensor systems and types may be developed and deployed, sensors will be replaced or redeployed at other locations, and the functionality of analysis software will be improved. To ensure the continuous operability of these systems, their architecture must be evolution-enabled. From a computer science point of view, an evolution-enabled architecture must fulfill the following criteria: • Encapsulation of data, and of functionality on data, in standardized services; access to proprietary sensor data is only possible via these services. • Loose coupling of system constituents, which can easily be achieved by implementing standardized interfaces. • Location transparency of services, which means that services can be provided everywhere. • Separation of concerns, i.e., breaking a system into distinct features which overlap in functionality as little as possible. A Service Oriented Architecture (SOA), as realized for example in the German Indonesian Tsunami Early Warning System (GITEWS), and the advantages of functional integration on the basis of services described below, adopt these criteria best. 2. SENSOR INTEGRATION Integration of data from (distributed) data sources is a standard task in computer science. Of the few well-known solution patterns, taking into account the performance and security requirements of early warning systems, only functional integration should be considered. A precondition for this is that systems are realized in compliance with SOA patterns. Functionality is realized in the form of dedicated components communicating via a service infrastructure. These components provide their functionality in the form of services via standardized and published interfaces, which can be used to access data maintained in, and functionality provided by, dedicated components. Functional integration replaces the tight coupling at the data level by a dependency on loosely coupled services. If the interfaces of the service providing

  16. Video personalization for usage environment

    Science.gov (United States)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.

  17. HDR video synthesis for vision systems in dynamic scenes

    Science.gov (United States)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
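
    The core fusion step, weighted averaging of aligned radiance maps, can be sketched as follows. The hat-shaped weighting and 8-bit normalization are common textbook choices, not necessarily the authors' exact ones, and the frames are assumed to be aligned already.

```python
import numpy as np

def fuse_radiance(frames, exposures):
    """Weighted average of aligned radiance maps: each 8-bit LDR frame is
    divided by its exposure time and weighted by a hat function that favours
    well-exposed, mid-range pixels."""
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros(frames[0].shape, dtype=float)
    for f, t in zip(frames, exposures):
        f = f.astype(float)
        w = 1.0 - np.abs(f / 255.0 - 0.5) * 2.0   # 1 at mid-grey, 0 at 0/255
        num += w * (f / t)                         # weighted radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

# Two aligned toy frames of mid-grey, shot at 1x and 2x exposure time
short = np.full((2, 2), 128, dtype=np.uint8)
long_ = np.full((2, 2), 128, dtype=np.uint8)
hdr = fuse_radiance([short, long_], exposures=[1.0, 2.0])
```

    Pixels flagged as moving by the thresholding step would simply receive zero weight in the frames where they are inconsistent.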

  18. AUTOMATIC FAST VIDEO OBJECT DETECTION AND TRACKING ON VIDEO SURVEILLANCE SYSTEM

    Directory of Open Access Journals (Sweden)

    V. Arunachalam

    2012-08-01

    This paper describes advanced techniques for object detection and tracking in video. Most visual surveillance systems start with motion detection. Motion detection methods attempt to locate connected regions of pixels that represent the moving objects within the scene; different approaches include frame-to-frame difference, background subtraction and motion analysis. Motion detection can be achieved by Principal Component Analysis (PCA), after which objects are separated from the background using background subtraction. The detected objects can then be segmented. Segmentation consists of two schemes: one for spatial segmentation and the other for temporal segmentation. Tracking is then performed on the detected object in each frame. The pixel-labeling problem can be alleviated by the MAP (Maximum a Posteriori) technique.
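
    The background-subtraction step named above can be sketched in its simplest form: threshold the absolute difference between the current frame and a background model. Real systems maintain an adaptive background; the static model and the threshold value here are illustrative assumptions.

```python
import numpy as np

def motion_mask(background, frame, thresh=25):
    """Background subtraction: pixels differing from the background model
    by more than `thresh` grey levels are flagged as moving."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

# Toy 8x8 grey-level scene: a static background and a frame with a 2x2 object
bg = np.full((8, 8), 50, dtype=np.uint8)
frame = bg.copy()
frame[3:5, 3:5] = 200          # "moving object" pixels
mask = motion_mask(bg, frame)
```

    Connected regions of the resulting mask are then labeled, segmented, and passed on to the tracker.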

  19. Research on MEMS sensor in hydraulic system flow detection

    Science.gov (United States)

    Zhang, Hongpeng; Zhang, Yindong; Liu, Dong; Ji, Yulong; Jiang, Jihai; Sun, Yuqing

    2011-05-01

    With the development of mechatronics technology and fault diagnosis theory, flow information receives much more attention than before. Cheap, fast and accurate flow sensors are urgently needed by the hydraulic industry, so MEMS sensors, which are small, low-cost, well-performing and easy to integrate, will surely play an important role in this field. Based on the new method of flow measurement put forward by our research group, this paper completed the measurement of flow rate in a hydraulic system by setting up a mathematical model, using numerical simulation and doing physical experiments. Based on the viscous fluid flow equations, we deduced the differential pressure-velocity model of this new sensor and optimized its parameters. We then designed and manufactured the throttle and studied the velocity and pressure fields inside the sensor using FLUENT, obtaining the differential pressure-velocity curve in simulation. The model machine was also simulated to guide the experiments. In the static experiments, we calibrated the MEMS sensing element and built some sample sensors. Then, in a hydraulic testing system, we compared the sensor signal with a turbine meter; it presented good linearity and could meet general hydraulic system use. Based on the CFD curves, we analyzed the sources of error and made some suggestions for improvement. In the dynamic test, we confirmed that this sensor can realize high-frequency flow detection with a seven-piston pump.
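
    The abstract's differential pressure-velocity model is the group's own derivation, but the textbook orifice relation v = Cd·sqrt(2Δp/ρ) illustrates the same principle of inferring flow from a pressure drop across a throttle. All numbers below (discharge coefficient, oil density, throttle area) are illustrative assumptions, not values from the paper.

```python
import math

def orifice_velocity(dp, rho=870.0, cd=0.62):
    """Textbook orifice relation v = Cd * sqrt(2*dp/rho); a stand-in for the
    paper's own differential pressure-velocity model.
    dp in Pa, rho in kg/m^3 (~870 for mineral hydraulic oil)."""
    return cd * math.sqrt(2.0 * dp / rho)

def flow_rate(dp, area, rho=870.0, cd=0.62):
    """Volumetric flow Q = v * A through the throttle cross-section (m^3/s)."""
    return orifice_velocity(dp, rho, cd) * area

q = flow_rate(dp=1.0e5, area=5.0e-5)   # 1 bar across an assumed 50 mm^2 throttle
```
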

  20. An Embedded Multi-Agent Systems Based Industrial Wireless Sensor Network.

    Science.gov (United States)

    Taboun, Mohammed S; Brennan, Robert W

    2017-09-14

    With the emergence of cyber-physical systems, there has been a growing interest in network-connected devices. One of the key requirements of a cyber-physical device is the ability to sense its environment. Wireless sensor networks are a widely accepted solution for this requirement. In this study, a wireless sensor network managed by embedded multi-agent systems is presented. A novel agent architecture is proposed, along with a novel wireless sensor network architecture. Active and passive wireless sensor node types are defined, along with their communication protocols, and two application-specific examples are presented. A series of three experiments is conducted to evaluate the performance of the agent-embedded wireless sensor network.

  1. Design of Mine Ventilators Monitoring System Based on Wireless Sensor Network

    International Nuclear Information System (INIS)

    Fu Sheng; Song Haiqiang

    2012-01-01

    A monitoring system for a mine ventilator is designed in this paper based on ZigBee wireless sensor network technology. The system consists of a sink node, sensor nodes, an industrial personal computer and several sensors. The sensor nodes communicate with the sink node through the ZigBee wireless sensor network, and the sink node connects to the configuration software on the PC via a serial port. The system can collect or calculate vibration, temperature, negative pressure, air volume and other information about the mine ventilator. Meanwhile, the system accurately monitors the operating condition of the ventilator through these parameters; in particular, it provides first-hand information on potential faults of the ventilator, thereby improving the efficiency of fault diagnosis.

  2. Design of Mine Ventilators Monitoring System Based on Wireless Sensor Network

    Science.gov (United States)

    Fu, Sheng; Song, Haiqiang

    2012-05-01

    A monitoring system for a mine ventilator is designed in this paper based on ZigBee wireless sensor network technology. The system consists of a sink node, sensor nodes, an industrial personal computer and several sensors. The sensor nodes communicate with the sink node through the ZigBee wireless sensor network, and the sink node connects to the configuration software on the PC via a serial port. The system can collect or calculate vibration, temperature, negative pressure, air volume and other information about the mine ventilator. Meanwhile, the system accurately monitors the operating condition of the ventilator through these parameters; in particular, it provides first-hand information on potential faults of the ventilator, thereby improving the efficiency of fault diagnosis.

  3. Hybrid compression of video with graphics in DTV communication systems

    NARCIS (Netherlands)

    Schaar, van der M.; With, de P.H.N.

    2000-01-01

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an

  4. Video Retrieval Berdasarkan Teks dan Gambar

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2013-01-01

    Abstract Video retrieval is used to search for videos based on a query entered by the user, which may be text, an image, or both. Such a system can improve the search capability in video browsing and is expected to reduce video retrieval time. The purpose of this research was to design and build a software application for video retrieval based on text and images in the video. The indexing process for text consists of tokenizing, filtering (stopword removal), and stemming; the stemming results are saved in the text index table. The indexing process for images builds a colour histogram for each image and computes the mean and standard deviation of each primary colour, red, green and blue (RGB), of each image; the extracted features are stored in the image table. Video retrieval can use a text query, an image query, or both. For a text query, the system looks up the query terms in the text index table; if the query terms are found, the system displays the information of the matching video. For an image query, the system extracts the six features (red, green and blue means and standard deviations) from the query image; if these features match an entry in the image index table, the system displays the information of the corresponding video. For a combined text and image query, the system displays the video information only when the text query and the image query are related, i.e., both resolve to the same film title.   Keywords—  video, index, retrieval, text, image
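
    The six-dimensional colour feature described above (per-channel RGB mean and standard deviation) and a nearest-match lookup can be sketched directly; the toy images and Euclidean matching rule are illustrative assumptions.

```python
import numpy as np

def rgb_features(image):
    """Six-dimensional colour feature: mean and standard deviation of the
    R, G and B channels, as stored in the image index table."""
    img = image.astype(float)
    means = img.mean(axis=(0, 1))        # (mean_R, mean_G, mean_B)
    stds = img.std(axis=(0, 1))          # (std_R, std_G, std_B)
    return np.concatenate([means, stds])

def match(query_feat, index_feats):
    """Indices of indexed images sorted by Euclidean distance to the query."""
    d = np.linalg.norm(index_feats - query_feat, axis=1)
    return np.argsort(d)

# Toy 2x2 RGB images: a red query should match the red index entry first
red   = np.tile(np.array([200, 10, 10], dtype=np.uint8), (2, 2, 1))
blue  = np.tile(np.array([10, 10, 200], dtype=np.uint8), (2, 2, 1))
index = np.stack([rgb_features(blue), rgb_features(red)])
order = match(rgb_features(red), index)     # → array([1, 0])
```
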

  5. Video game-based neuromuscular electrical stimulation system for calf muscle training: a case study.

    Science.gov (United States)

    Sayenko, D G; Masani, K; Milosevic, M; Robinson, M F; Vette, A H; McConville, K M V; Popovic, M R

    2011-03-01

    A video game-based training system was designed to integrate neuromuscular electrical stimulation (NMES) and visual feedback as a means to improve strength and endurance of the lower leg muscles, and to increase the range of motion (ROM) of the ankle joints. The system allowed the participants to perform isotonic concentric and isometric contractions in both the plantarflexors and dorsiflexors using NMES. In the proposed system, the contractions were performed against exterior resistance, and the angle of the ankle joints was used as the control input to the video game. To test the practicality of the proposed system, an individual with chronic complete spinal cord injury (SCI) participated in the study. The system provided a progressive overload for the trained muscles, which is a prerequisite for successful muscle training. The participant indicated that he enjoyed the video game-based training and that he would like to continue the treatment. The results show that the training resulted in a significant improvement of the strength and endurance of the paralyzed lower leg muscles, and in an increased ROM of the ankle joints. Video game-based training programs might be effective in motivating participants to train more frequently and adhere to otherwise tedious training protocols. It is expected that such training will not only improve the properties of their muscles but also decrease the severity and frequency of secondary complications that result from SCI. Copyright © 2010 IPEM. All rights reserved.

  6. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Different biometric traits such as face appearance and the heartbeat signal from the Electrocardiogram (ECG)/Phonocardiogram (PCG) are widely used in human identity recognition. Recent advances in facial-video-based measurement of cardio-physiological parameters such as heartbeat rate, respiratory rate, and blood volume pressure provide the possibility of extracting the heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors on the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time to the best of our knowledge. Feature extraction from the HSFV is accomplished by employing the Radon transform on a waterfall model of the replicated HSFV. The pairwise Minkowski distances obtained from the Radon image are used as the features. Authentication is accomplished by a decision-tree-based supervised classifier.
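
    The pairwise-Minkowski-distance feature step can be illustrated on a toy array standing in for the Radon image; the choice of order p and of taking distances between rows is an assumption for illustration, since the abstract does not specify the pairing.

```python
import numpy as np
from itertools import combinations

def minkowski(a, b, p=3):
    """Minkowski distance of order p between two vectors."""
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

def pairwise_features(radon_image, p=3):
    """Feature vector: pairwise Minkowski distances between the rows
    (projection angles) of a Radon-transformed signal image."""
    rows = [radon_image[i] for i in range(radon_image.shape[0])]
    return np.array([minkowski(a, b, p) for a, b in combinations(rows, 2)])

# Toy 3x4 "Radon image": 3 projection angles, 4 offsets
R = np.array([[1.0, 2.0, 3.0, 4.0],
              [1.0, 2.0, 3.0, 4.0],
              [5.0, 5.0, 5.0, 5.0]])
feats = pairwise_features(R)   # 3 pairwise distances, one per row pair
```
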

  7. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System

    Directory of Open Access Journals (Sweden)

    Hyundo Choi

    2018-02-01

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with difficulties in walking due to muscle weakness. It senses and monitors the delivered force and power of the exoskeleton for motion control and for taking urgent safety action. Two FSR (force-sensitive resistor) sensors are used to measure the assistance force when the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user, instead of a previously reported force-sensing method, which estimated the hip assistance force from the motor current and lookup tables. Furthermore, the sensor system has the advantage of generating torque in the walking-assistant actuator based on directly measuring the hip-assistance force. Thus, the gait-assistance exoskeleton system can control the power and torque delivered to the user. The force-sensing structure is designed to decouple the force caused by hip motion from forces in other directions, so that only that force is measured. We confirmed that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame through an experiment with a real system.

  8. Compact Hip-Force Sensor for a Gait-Assistance Exoskeleton System.

    Science.gov (United States)

    Choi, Hyundo; Seo, Keehong; Hyung, Seungyong; Shim, Youngbo; Lim, Soo-Chul

    2018-02-13

    In this paper, we propose a compact force sensor system for a hip-mounted exoskeleton for seniors with difficulties in walking due to muscle weakness. It senses and monitors the delivered force and power of the exoskeleton for motion control and taking urgent safety action. Two FSR (force-sensitive resistor) sensors are used to measure the assistance force when the user is walking. The sensor system directly measures the interaction force between the exoskeleton and the lower limb of the user instead of a previously reported force-sensing method, which estimated the hip assistance force from the current of the motor and lookup tables. Furthermore, the sensor system has the advantage of generating torque in the walking-assistant actuator based on directly measuring the hip-assistance force. Thus, the gait-assistance exoskeleton system can control the delivered power and torque to the user. The force sensing structure is designed to decouple the force caused by hip motion from other directional forces to the sensor so as to only measure that force. We confirmed that the hip-assistance force could be measured with the proposed prototype compact force sensor attached to a thigh frame through an experiment with a real system.
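    The decoupled force measurement described above can be illustrated with a toy calculation. The sketch below is hypothetical, not the paper's calibration: the inverse-linear FSR model, the constant `k`, and the two-pad differencing are all assumptions made for illustration.

    ```python
    # Hypothetical sketch: estimating hip-assistance force from two FSR readings.
    # The calibration model and constants are illustrative assumptions.

    def fsr_to_force(resistance_ohm, k=50000.0):
        """Convert FSR resistance to force using an assumed inverse-linear model."""
        return k / resistance_ohm  # force in newtons (illustrative)

    def assistance_force(fsr_front_ohm, fsr_rear_ohm):
        """Net assistance force: front pad pushes the thigh forward, the rear
        pad resists; only the difference drives the torque control loop."""
        return fsr_to_force(fsr_front_ohm) - fsr_to_force(fsr_rear_ohm)

    print(assistance_force(2500.0, 5000.0))  # prints 10.0 (20 N front minus 10 N rear)
    ```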

  9. A Survey of Wireless Sensor Network Based Air Pollution Monitoring Systems.

    Science.gov (United States)

    Yi, Wei Ying; Lo, Kin Ming; Mak, Terrence; Leung, Kwong Sak; Leung, Yee; Meng, Mei Ling

    2015-12-12

    The air quality in urban areas is a major concern in modern cities due to the significant impacts of air pollution on public health, the global environment, and the worldwide economy. Recent studies reveal the importance of micro-level pollution information, including personal exposure and acute exposure to air pollutants. A real-time system with high spatio-temporal resolution is essential because of the limited data availability and non-scalability of conventional air pollution monitoring systems. Currently, researchers focus on the concept of The Next Generation Air Pollution Monitoring System (TNGAPMS) and have achieved significant breakthroughs by utilizing advanced sensing technologies, MicroElectroMechanical Systems (MEMS) and Wireless Sensor Networks (WSN). However, these newly proposed systems have potential problems, namely the lack of 3D data acquisition ability and limited sensor-network flexibility. In this paper, we classify the existing works into three categories based on the carriers of the sensors: Static Sensor Network (SSN), Community Sensor Network (CSN), and Vehicle Sensor Network (VSN). Comprehensive reviews and comparisons among these three types of sensor networks are also presented. Last but not least, we discuss the limitations of the existing works and set out the objectives we aim to achieve in future systems.

  10. Sensor concentrator unit for the Continuous Automated Vault Inventory System

    Energy Technology Data Exchange (ETDEWEB)

    Nodine, R.N.; Lenarduzzi, R.

    1997-06-01

    The purpose of this document is to describe the use and operation of the sensor concentrator in the Continuous Automated Vault Inventory System (CAVIS). The CAVIS electronically verifies the presence of items of stored special nuclear material (SNM). US Department of Energy orders require that stored SNM be inventoried periodically to provide assurance that the material is secure. Currently this inventory is a highly manual activity, requiring personnel to enter the storage vaults. Using a CAVIS allows the frequency of physical inventories to be significantly reduced, resulting in substantial cost savings, increased security, and improved safety. The electronic inventory of stored SNM requires two different types of sensors for each item. The two sensors measure different parameters of the item, usually weight and gamma rays. A CAVIS is constructed using four basic system components: sensors, sensor concentrators, a data collection unit, and a database/user interface unit. One sensor concentrator supports the inventory of up to 20 items (40 sensors) and continuously takes readings from the item sensors. On request, the sensor concentrator outputs the most recent sensor readings to the data collection unit. The information transfer takes place over an RS485 communications link. The data collection unit supports from 1 to 120 sensor concentrators (1 to 2,400 items) and is referred to as the Sensor Polling and Configuration System (SPCS). The SPCS is connected by a secure Transmission Control Protocol/Internet Protocol (TCP/IP) network to the database/user interface unit, which is referred to as the Graphical Facility Information Center (GraFIC). A CAVIS containing more than 2,400 items is supported by connecting additional SPCS units to the GraFIC.
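    The concentrator's role described above (holding the latest weight and gamma readings for up to 20 items and reporting them on request) can be sketched as follows; class and method names are illustrative assumptions, not taken from the CAVIS documentation.

    ```python
    # Illustrative sketch of a sensor concentrator: keep the latest reading for
    # each item's two sensors (weight, gamma) and return them when polled.
    # Names and data layout are assumptions, not from the CAVIS documents.

    class SensorConcentrator:
        MAX_ITEMS = 20  # one concentrator supports up to 20 items (40 sensors)

        def __init__(self):
            self.latest = {}  # item_id -> (weight_reading, gamma_reading)

        def update(self, item_id, weight, gamma):
            """Continuously called as new readings arrive from item sensors."""
            if item_id not in self.latest and len(self.latest) >= self.MAX_ITEMS:
                raise ValueError("concentrator full")
            self.latest[item_id] = (weight, gamma)

        def poll(self):
            """The snapshot the data collection unit (SPCS) would request."""
            return dict(self.latest)

    c = SensorConcentrator()
    c.update("item-01", 12.4, 350)
    print(c.poll()["item-01"])  # prints (12.4, 350)
    ```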

  11. Sensor concentrator unit for the Continuous Automated Vault Inventory System

    International Nuclear Information System (INIS)

    Nodine, R.N.; Lenarduzzi, R.

    1997-06-01

    The purpose of this document is to describe the use and operation of the sensor concentrator in the Continuous Automated Vault Inventory System (CAVIS). The CAVIS electronically verifies the presence of items of stored special nuclear material (SNM). US Department of Energy orders require that stored SNM be inventoried periodically to provide assurance that the material is secure. Currently this inventory is a highly manual activity, requiring personnel to enter the storage vaults. Using a CAVIS allows the frequency of physical inventories to be significantly reduced, resulting in substantial cost savings, increased security, and improved safety. The electronic inventory of stored SNM requires two different types of sensors for each item. The two sensors measure different parameters of the item, usually weight and gamma rays. A CAVIS is constructed using four basic system components: sensors, sensor concentrators, a data collection unit, and a database/user interface unit. One sensor concentrator supports the inventory of up to 20 items (40 sensors) and continuously takes readings from the item sensors. On request, the sensor concentrator outputs the most recent sensor readings to the data collection unit. The information transfer takes place over an RS485 communications link. The data collection unit supports from 1 to 120 sensor concentrators (1 to 2,400 items) and is referred to as the Sensor Polling and Configuration System (SPCS). The SPCS is connected by a secure Transmission Control Protocol/Internet Protocol (TCP/IP) network to the database/user interface unit, which is referred to as the Graphical Facility Information Center (GraFIC). A CAVIS containing more than 2,400 items is supported by connecting additional SPCS units to the GraFIC.

  12. Evaluation of Distance Education System for Adult Education Using 4 Video Transmissions

    OpenAIRE

    渡部, 和雄; 湯瀬, 裕昭; 渡邉, 貴之; 井口, 真彦; 藤田, 広一

    2004-01-01

    The authors have developed a distance education system for interactive education which can transmit 4 video streams between distant lecture rooms. In this paper, we describe the results of our experiments using the system for adult education. We propose some efficient ways to use the system for adult education.

  13. Development of P4140 video data wall projector; Video data wall projector

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, H.; Inoue, H. [Toshiba Corp., Tokyo (Japan)

    1998-12-01

    The P4140 is a three-cathode-ray-tube (CRT) video data wall projector for super video graphics array (SVGA) signals. It is used as an image display unit, providing a large screen when several sets are put together. A high-quality picture has been realized through higher resolution and improved color uniformity technology. A new convergence adjustment system has also been developed through the optimal combination of digital and analog technologies. Installation of the video data wall has been greatly simplified by automating cube setup and cube performance settings. The P4140 video data wall projector can be used for displaying not only data but video as well. (author)

  14. An overview of recent end-to-end wireless medical video telemedicine systems using 3G.

    Science.gov (United States)

    Panayides, A; Pattichis, M S; Pattichis, C S; Schizas, C N; Spanias, A; Kyriacou, E

    2010-01-01

    Advances in video compression, network technologies, and computer technologies have contributed to the rapid growth of mobile health (m-health) systems and services. Wide deployment of such systems and services is expected in the near future, and it is foreseen that they will soon be incorporated into daily clinical practice. This study focuses on describing the basic components of an end-to-end wireless medical video telemedicine system, providing a brief overview of recent advances in the field, while also highlighting future trends in the design of telemedicine systems that are diagnostically driven.

  15. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    Science.gov (United States)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    Image processing at the sensor node level may also be required for applications in security, asset management, and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g., ZigBee) are required. To this end, Avaak has designed and implemented an ultra-low-power networking protocol designed to carry large volumes of data through the network. The low-power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor, and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to the communications, identification is very desirable; hence, location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation via wide-band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications, some of which are undergoing initial field tests.

  16. A Printed Organic Amplification System for Wearable Potentiometric Electrochemical Sensors.

    Science.gov (United States)

    Shiwaku, Rei; Matsui, Hiroyuki; Nagamine, Kuniaki; Uematsu, Mayu; Mano, Taisei; Maruyama, Yuki; Nomura, Ayako; Tsuchiya, Kazuhiko; Hayasaka, Kazuma; Takeda, Yasunori; Fukuda, Takashi; Kumaki, Daisuke; Tokito, Shizuo

    2018-03-02

    Electrochemical sensor systems with integrated amplifier circuits play an important role in measuring physiological signals via in situ human perspiration analysis. Signal-processing circuitry based on organic thin-film transistors (OTFTs) has significant potential for realizing wearable sensor devices due to its superior mechanical flexibility and biocompatibility. Here, we demonstrate a novel potentiometric electrochemical sensing system comprising a potassium ion (K+) sensor and amplifier circuits employing OTFT-based pseudo-CMOS inverters, which have a highly controllable switching voltage and closed-loop gain. The ion concentration sensitivity of the fabricated K+ sensor was 34 mV/dec, which was amplified to 160 mV/dec (by a factor of 4.6) with high linearity. The developed system is expected to help further the realization of ultra-thin and flexible wearable sensor devices for healthcare applications.
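    The reported figures can be checked with one line of arithmetic: a 34 mV/dec ion sensitivity amplified by the stated closed-loop gain of 4.6 gives roughly the 160 mV/dec reported.

    ```python
    # Sanity check of the numbers quoted above: Nernstian-scale sensitivity
    # times the closed-loop gain of the pseudo-CMOS amplifier stage.
    sensitivity_mv_per_dec = 34.0
    gain = 4.6
    amplified = sensitivity_mv_per_dec * gain
    print(round(amplified, 1))  # prints 156.4, consistent with the ~160 mV/dec reported
    ```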

  17. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be realized to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
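    The sub-pixel interpolation mentioned above reduces, in its simplest form, to bilinear sampling at fractional coordinates. The sketch below is a minimal pure-Python illustration and makes no assumptions about the actual MIL/OpenCV implementation used by the authors.

    ```python
    # Minimal sketch of sub-pixel (bilinear) interpolation, the kind of step
    # needed when mapping a retina-like pixel layout onto a rectangular grid.

    def bilinear(img, x, y):
        """Sample image `img` (list of rows) at fractional coordinates (x, y)."""
        x0, y0 = int(x), int(y)
        dx, dy = x - x0, y - y0
        x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the image border
        y1 = min(y0 + 1, len(img) - 1)
        top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
        bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
        return top * (1 - dy) + bot * dy

    img = [[0, 10], [20, 30]]
    print(bilinear(img, 0.5, 0.5))  # prints 15.0, the average of the four neighbours
    ```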

  18. Wireless Zigbee strain gage sensor system for structural health monitoring

    Science.gov (United States)

    Ide, Hiroshi; Abdi, Frank; Miraj, Rashid; Dang, Chau; Takahashi, Tatsuya; Sauer, Bruce

    2009-05-01

    A compact, cell-phone-sized radio-frequency (ZigBee) wireless strain measurement sensor system to measure structural strain deformation was developed. The developed system provides an accurate strain measurement data stream to the Internet for further Diagnostic and Prognostic (DPS) correlation. Existing methods of structural measurement by strain sensors (gauges) do not completely solve the problems posed by continuous structural health monitoring. The need for efficient health monitoring methods with real-time, bidirectional data flow from sensors to a commanding device is becoming critical for safety in daily life. The use of full-field strain measurement techniques could reduce costly experimental programs through better understanding of material behavior. Wireless sensor-network technology is a monitoring method that is expected to grow rapidly, providing potential cost savings over traditional wired sensors. Many currently available wireless monitoring methods have proactive, constant-rate data streams rather than traditional reactive, event-driven data delivery, and mostly static node placement on structures with a limited number of nodes. Alpha STAR Electronics' wireless sensor network system, ASWN, addresses some of these deficiencies, making the system easier to operate. The ASWN strain measurement system utilizes off-the-shelf sensors, namely strain gauges, with an analog-to-digital converter/amplifier and ZigBee radio chips to keep costs low. Strain data are captured by the sensor, converted to digital form, and delivered to the ZigBee radio chip, which in turn broadcasts the information using wireless protocols to a Personal Data Assistant (PDA) or laptop/desktop computer. From there, data are forwarded to remote computers for higher-level analysis and feedback using traditional cellular and satellite communication or the Ethernet infrastructure. The system offers compact size and lower cost.
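    The node's conversion from a bridge voltage to strain, performed in the ADC/amplifier stage before the data reach the ZigBee radio, can be sketched with the standard quarter-bridge approximation; the gauge factor and voltages below are illustrative assumptions, not values from the paper.

    ```python
    # Illustrative quarter-bridge strain-gauge conversion: bridge output voltage
    # to strain. Gauge factor and excitation voltage are assumed values.

    def quarter_bridge_strain(v_out, v_excitation, gauge_factor=2.0):
        """Quarter-bridge approximation: strain = 4*Vr / (GF*(1 + 2*Vr)),
        with Vr = v_out / v_excitation."""
        vr = v_out / v_excitation
        return 4.0 * vr / (gauge_factor * (1.0 + 2.0 * vr))

    # A 2.5 mV bridge output on 5 V excitation is roughly 999 microstrain.
    strain = quarter_bridge_strain(0.0025, 5.0)
    print(round(strain * 1e6, 1))  # prints 999.0
    ```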

  19. Sensor Systems for Vehicle Environment Perception in a Highway Intelligent Space System

    Science.gov (United States)

    Tang, Xiaofeng; Gao, Feng; Xu, Guoyan; Ding, Nenggen; Cai, Yao; Ma, Mingming; Liu, Jianxing

    2014-01-01

    A Highway Intelligent Space System (HISS) is proposed in this paper to study vehicle environment perception. The essence of HISS is that a sensor system using laser, ultrasonic, or radar sensors is installed in the highway environment, and communication technology is used to exchange information between the HISS server and vehicles, providing vehicles with information about the surrounding road. Considering the high speeds of vehicles on highways, when a vehicle is approaching a road section prone to accidents, its driving state should be predicted so that the driver has road environment perception information in advance, thereby ensuring driving safety and stability. In order to verify the accuracy and feasibility of the HISS, a traditional vehicle-mounted sensor system for environment perception is used to obtain the relative driving state. Furthermore, an inter-vehicle dynamics model is built, and a model predictive control approach is used to predict the driving state over the following period. Finally, the simulation results show that using the HISS for environment perception yields the same results as detection by a traditional vehicle-mounted sensor system. Meanwhile, we can further conclude that using HISS for vehicle environment perception ensures system stability, thereby demonstrating the method's feasibility. PMID:24834907
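    The driving-state prediction step can be illustrated with a much simpler stand-in for the paper's model-predictive-control scheme: a constant-acceleration rollout of the inter-vehicle gap over the prediction horizon. The kinematic model and all numbers are illustrative assumptions.

    ```python
    # Illustrative stand-in for driving-state prediction: propagate the
    # inter-vehicle gap with a constant-acceleration kinematic model over the
    # prediction horizon (the paper uses a full inter-vehicle dynamics model
    # with model predictive control; this only shows the structure).

    def predict_gap(gap_m, rel_speed_mps, rel_accel_mps2, dt, steps):
        """Roll the relative state forward `steps` intervals of length `dt`."""
        gaps = []
        for _ in range(steps):
            gap_m += rel_speed_mps * dt + 0.5 * rel_accel_mps2 * dt * dt
            rel_speed_mps += rel_accel_mps2 * dt
            gaps.append(gap_m)
        return gaps

    # Gap closing at 2 m/s with no relative acceleration: shrinks 2 m per step.
    print(predict_gap(50.0, -2.0, 0.0, 1.0, 3))  # prints [48.0, 46.0, 44.0]
    ```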

  20. Tank Monitor and Control System sensor acceptance test procedure. Revision 5

    International Nuclear Information System (INIS)

    Scaief, C.C. III.

    1994-01-01

    The purpose of this acceptance test procedure (ATP) is to verify the correct reading of sensor elements connected to the Tank Monitor and Control System (TMACS). This ATP is intended to be used for testing the connection of existing temperature sensors, new temperature sensors, pressure-sensing equipment, new Enraf level gauges, sensors that generate a current output, and discrete (on/off) inputs. It is intended that this ATP will be used each time sensors are added to the system; as a result, the data sheets have been designed to be generic. The TMACS has been designed in response to recommendations from the Defense Nuclear Facilities Safety Board, primarily for improved monitoring of waste tank temperatures. The system has been designed with the capability to monitor other types of sensor input as well.

  1. Intelligent Chemical Sensor Systems for In-space Safety Applications

    Science.gov (United States)

    Hunter, G. W.; Xu, J. C.; Neudeck, P. G.; Makel, D. B.; Ward, B.; Liu, C. C.

    2006-01-01

    Future in-space and lunar operations will require significantly improved monitoring and Integrated System Health Management (ISHM) throughout the mission. In particular, the monitoring of chemical species is an important component of an overall monitoring system for space vehicles and operations. For example, in leak monitoring of propulsion systems during launch, in-space, and on lunar surfaces, detection of low concentrations of hydrogen and other fuels is important to avoid explosive conditions that could harm personnel and damage the vehicle. Dependable vehicle operation also depends on the timely and accurate measurement of these leaks. Thus, the development of a sensor array to determine the concentration of fuels such as hydrogen, hydrocarbons, or hydrazine, as well as oxygen, is necessary. Work has been ongoing to develop an integrated smart leak detection system based on miniaturized sensors to detect hydrogen, hydrocarbons, or hydrazine, and oxygen. The approach is to implement Microelectromechanical Systems (MEMS) based sensors incorporated with signal conditioning electronics, power, data storage, and telemetry, enabling intelligent systems. The final sensor system will be self-contained, with a surface area comparable to a postage stamp. This paper discusses the development of this "Lick and Stick" leak detection system and its application to In-Space Transportation and other Exploration applications.

  2. Sensor Buoy System for Monitoring Renewable Marine Energy Resources.

    Science.gov (United States)

    García, Emilio; Quiles, Eduardo; Correcher, Antonio; Morant, Francisco

    2018-03-22

    In this paper we present a multi-sensor floating system designed to monitor marine energy parameters, in order to sample wind, wave, and marine current energy resources. For this purpose, a set of dedicated sensors to measure the height and period of the waves, wind, and marine current intensity and direction have been selected and installed in the system. The floating device incorporates wind and marine current turbines for renewable energy self-consumption and to carry out complementary studies on the stability of such a system. The feasibility, safety, sensor communications, and buoy stability of the floating device have been successfully checked in real operating conditions.

  3. Optimal use of video for teaching the practical implications of studying business information systems

    DEFF Research Database (Denmark)

    Fog, Benedikte; Ulfkjær, Jacob Kanneworff Stigsen; Schlichter, Bjarne Rerup

    that video should be introduced early during a course to prevent students' misconceptions about working with business information systems, as well as to increase motivation and comprehension within the academic area. It is also considered important to have a trustworthy person explaining the practical......The study of business information systems has become increasingly important in the Digital Economy. However, it has been found that students have difficulty understanding its practical implications, which leads to decreased motivation. This study aims to investigate how to optimize...... not sufficiently reflect the theoretical recommendations for using video optimally in a management education. It did not comply with the video learning sequence introduced by Marx and Frost (1998). However, it questions whether the level of cognitive orientation activities can become too extensive. It finds...

  4. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited to information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish abnormal events from normal patterns. The experimental results demonstrate that the proposed method's performance is comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
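    The foreground-detection step such a framework plugs into can be illustrated with the simplest possible background-subtraction scheme: frame differencing against a static background with a fixed threshold. Production systems would use an adaptive background model, so this pure-Python version only shows the principle on tiny "frames".

    ```python
    # Minimal background-subtraction sketch: a pixel is foreground when it
    # differs from the background model by more than a fixed threshold.
    # The counted foreground pixels stand in for one simple ensemble feature.

    def foreground_mask(frame, background, threshold=20):
        """1 where the pixel differs from the background model, else 0."""
        return [[1 if abs(p - b) > threshold else 0
                 for p, b in zip(frow, brow)]
                for frow, brow in zip(frame, background)]

    background = [[100, 100, 100], [100, 100, 100]]
    frame      = [[100, 180, 100], [100, 175, 100]]  # a bright moving blob
    mask = foreground_mask(frame, background)
    print(mask)                 # prints [[0, 1, 0], [0, 1, 0]]
    print(sum(map(sum, mask)))  # prints 2 (foreground pixel count)
    ```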

  5. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA, and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckeler micrometer equipped with a digital readout; a second feature is then aligned with the reference line and the distance moved is obtained from the digital display.

  6. Underwater Animal Monitoring Magnetic Sensor System

    KAUST Repository

    Kaidarova, Altynay

    2017-10-01

    Obtaining new insights into the behavior of free-living marine organisms is fundamental for conservation efforts and for anticipating the impact of climate change on marine ecosystems. Despite recent advances in biotelemetry, collecting physiological and behavioral parameters of underwater free-living animals remains technically challenging. In this thesis, we develop the first magnetic underwater animal monitoring system that utilizes tunnel magnetoresistance (TMR) sensors, the most sensitive solid-state sensors available today, coupled with flexible magnetic composites. The TMR sensors are composed of CoFeB free layers and MgO tunnel barriers, patterned using standard optical lithography and ion milling procedures. The short- and long-term stability of the TMR sensors has been studied using statistical and Allan deviation analysis. Instrumentation noise has been reduced using optimized electrical interconnection schemes. We also develop flexible NdFeB-PDMS composite magnets optimized for applications in corrosive marine environments, which can be attached to marine animals. The magnetic and mechanical properties are studied for different NdFeB powder concentrations, and the performance of the magnetic composites for different exposure times to seawater is systematically investigated. Without a protective layer, the composite magnets lose more than 50% of their magnetization after 51 days in seawater. The durability of the composite magnets can be considerably improved by polymer coatings that protect the composite magnet; Parylene C is found to be the most effective solution, simultaneously providing corrosion resistance, flexibility, and enhanced biocompatibility. A Parylene C film of 2 μm thickness provides sufficient protection of the magnetic composite in corrosive aqueous environments for more than 70 days. For high-level performance of the system, the theoretically optimal position of the composite magnets with respect to the sensing

  7. A Smart Sensor Data Transmission Technique for Logistics and Intelligent Transportation Systems

    OpenAIRE

    Kyunghee Sun; Intae Ryoo

    2018-01-01

    When it comes to Internet of Things systems that include both a logistics system and an intelligent transportation system, a smart sensor is one of the key elements to collect useful information whenever and wherever necessary. This study proposes the Smart Sensor Node Group Management Medium Access Control Scheme designed to group smart sensor devices and collect data from them efficiently. The proposed scheme performs grouping of portable sensor devices connected to a system depending on th...

  8. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The one camera remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include but are not limited to core sampling, auger activities, crust layer examination, monitoring of equipment installation/removal, and any other activities. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program

  9. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access, and play back distributed stored video data as easily as they do with traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time-sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad hoc adaptation modules. More precisely, SIRSALE allows users to (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  10. An Embedded Multi-Agent Systems Based Industrial Wireless Sensor Network

    Science.gov (United States)

    Brennan, Robert W.

    2017-01-01

    With the emergence of cyber-physical systems, there has been a growing interest in network-connected devices. One of the key requirements of a cyber-physical device is the ability to sense its environment. Wireless sensor networks are a widely-accepted solution for this requirement. In this study, an embedded multi-agent systems-managed wireless sensor network is presented. A novel architecture is proposed, along with a novel wireless sensor network architecture. Active and passive wireless sensor node types are defined, along with their communication protocols, and two application-specific examples are presented. A series of three experiments is conducted to evaluate the performance of the agent-embedded wireless sensor network. PMID:28906452

  11. Handbook of sensor networks compact wireless and wired sensing systems

    CERN Document Server

    Ilyas, Mohammad

    2004-01-01

    INTRODUCTION: Opportunities and Challenges in Wireless Sensor Networks, M. Haenggi; Next Generation Technologies to Enable Sensor Networks, J. I. Goodman, A. I. Reuther, and D. R. Martinez; Sensor Networks Management, L. B. Ruiz, J. M. Nogueira, and A. A. F. Loureiro; Models for Programmability in Sensor Networks, A. Boulis; Miniaturizing Sensor Networks with MEMS, B. Warneke; A Taxonomy of Routing Techniques in Wireless Sensor Networks, J. N. Al-Karaki and A. E. Kamal; Artificial Perceptual Systems, A. Loutfi, M. Lindquist, and P. Wide. APPLICATIONS: Sensor Network Architecture and Appl

  12. Novel Wireless Sensor System for Dynamic Characterization of Borehole Heat Exchangers

    Directory of Open Access Journals (Sweden)

    Raimundo García-Olcina

    2011-07-01

    Full Text Available The design and field test of a novel sensor system based on autonomous wireless sensors to measure the temperature of the heat transfer fluid along a borehole heat exchanger (BHE) is presented. The system, by means of two special valves, inserts and extracts miniaturized wireless sensors into and from the pipes of the borehole, where they are carried along by the thermal fluid. Each sensor is embedded in a small sphere of just 25 mm diameter and 8 g weight, containing a transceiver, a microcontroller, a temperature sensor, and a power supply. A wireless data processing unit transmits the acquisition configuration to the sensors before the measurements, and also downloads the temperature data measured by each sensor along its way through the BHE U-tube. This sensor system is intended to improve the conventional thermal response test (TRT): it allows the collection of information about the thermal characteristics of the geological structure of the subsurface and its influence on borehole thermal behaviour, which in turn facilitates the implementation of TRTs in a more cost-effective and reliable way.

  13. Novel wireless sensor system for dynamic characterization of borehole heat exchangers.

    Science.gov (United States)

    Martos, Julio; Montero, Álvaro; Torres, José; Soret, Jesús; Martínez, Guillermo; García-Olcina, Raimundo

    2011-01-01

    The design and field test of a novel sensor system based on autonomous wireless sensors to measure the temperature of the heat transfer fluid along a borehole heat exchanger (BHE) is presented. The system, by means of two special valves, inserts and extracts miniaturized wireless sensors inside the pipes of the borehole, which are carried by the thermal fluid. Each sensor is embedded in a small sphere of just 25 mm diameter and 8 g weight, containing a transceiver, a microcontroller, a temperature sensor and a power supply. A wireless data processing unit transmits the acquisition configuration to the sensors before the measurements, and also downloads the temperature data measured by each sensor along its way through the BHE U-tube. This sensor system is intended to improve the conventional thermal response test (TRT): it allows the collection of information about the thermal characteristics of the geological structure of the subsurface and its influence on borehole thermal behaviour, which in turn facilitates the implementation of TRTs in a more cost-effective and reliable way.

  14. Development of an emergency medical video multiplexing transport system. Aiming at nationwide prehospital care in ambulances.

    Science.gov (United States)

    Nagatuma, Hideaki

    2003-04-01

    The Emergency Medical Video Multiplexing Transport System (EMTS) is designed to support prehospital care by delivering high quality live video streams of patients in an ambulance to emergency doctors in a remote hospital via satellite communications. Its key feature is that EMTS divides a patient's live video scene into four pieces and transports the four video streams over four separate network channels. By multiplexing four video streams, EMTS is able to transport high quality video through low-data-rate networks such as satellite communications and cellular phone networks. To transport live video streams constantly, EMTS adopts the Real-time Transport Protocol/Real-time Control Protocol as its network protocol, and video stream data are compressed in Moving Picture Experts Group 4 (MPEG-4) format. Because EMTS recombines the four video streams by checking video frame numbers, it uses a refresh packet that initializes the server's frame numbers to synchronize the four video streams.
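
    The four-piece division and frame-number synchronization described above can be sketched as follows. The quadrant layout, the `StreamMuxer` class and its `refresh` method are illustrative assumptions, since the abstract does not specify the exact split geometry or packet format:

```python
def split_frame(frame):
    """Split a frame (list of pixel rows) into four quadrant pieces.
    A quadrant split is an assumed layout for the four-piece division."""
    h, w = len(frame), len(frame[0])
    mh, mw = h // 2, w // 2
    return [
        [row[:mw] for row in frame[:mh]],   # channel 0: top-left
        [row[mw:] for row in frame[:mh]],   # channel 1: top-right
        [row[:mw] for row in frame[mh:]],   # channel 2: bottom-left
        [row[mw:] for row in frame[mh:]],   # channel 3: bottom-right
    ]

def merge_frame(pieces):
    """Reassemble the four quadrant pieces into a full frame."""
    top = [l + r for l, r in zip(pieces[0], pieces[1])]
    bottom = [l + r for l, r in zip(pieces[2], pieces[3])]
    return top + bottom

class StreamMuxer:
    """Recombines quadrant pieces only when all four channels deliver
    the same frame number; refresh() drops partial frames, mirroring
    the refresh packet that resets the server's frame counters."""
    def __init__(self):
        self.buffer = {}  # frame_no -> {channel: piece}

    def receive(self, channel, frame_no, piece):
        slot = self.buffer.setdefault(frame_no, {})
        slot[channel] = piece
        if len(slot) == 4:  # all four channels arrived for this frame
            del self.buffer[frame_no]
            return merge_frame([slot[c] for c in range(4)])
        return None  # still waiting on at least one channel

    def refresh(self):
        self.buffer.clear()
```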

  15. Learning neuroendoscopy with an exoscope system (video telescopic operating monitor): Early clinical results.

    Science.gov (United States)

    Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya

    2016-01-01

    A steep learning curve is initially encountered in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy during the initial learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to that of the endoscope and microscope, and surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors, which were compensated for with repeated procedures. VITOM was found useful in reducing the initial learning curve of neuroendoscopy.

  16. Video Feedforward for Rapid Learning of a Picture-Based Communication System

    Science.gov (United States)

    Smith, Jemma; Hand, Linda; Dowrick, Peter W.

    2014-01-01

    This study examined the efficacy of video self modeling (VSM) using feedforward to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…

  17. Towards a sensor for detecting human presence and activity

    OpenAIRE

    Benezeth , Yannick; Laurent , Hélène; Emile , Bruno; Rosenberger , Christophe

    2011-01-01

    International audience; In this paper, we propose a vision-based system for human detection and tracking in indoor environments, allowing the collection of higher-level information on people's activity. The developed presence sensor, based on video analysis using a static camera, is first presented. It is composed of three main steps: the first consists in change detection using a background model updated at different levels to manage the most common variations of the environment. A moving objects trac...
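
    The change-detection step described above is commonly built on a running-average background model. The sketch below is a minimal illustration under that assumption; the function names, `alpha` and `threshold` values are hypothetical, not the authors' parameters:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1-alpha)*bg + alpha*frame.
    A small alpha adapts slowly, absorbing gradual lighting changes."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def detect_changes(bg, frame, threshold=30):
    """Binary foreground mask: 1 where a pixel deviates from the model
    by more than the threshold, 0 elsewhere."""
    return [[1 if abs(f - b) > threshold else 0
             for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

    In a full pipeline, the mask would feed the subsequent tracking step, and the model would be updated only on background pixels to avoid absorbing the people being detected.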

  18. A Mobile Sensor Network System for Monitoring of Unfriendly Environments.

    Science.gov (United States)

    Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo

    2008-11-14

    Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are too dangerous or inaccessible for humans or large robots, and there are many challenges in deploying and maintaining wireless sensor networks in such unfriendly environments. This paper presents a mobile sensor network system for solving this problem. The system architecture, the mobile node design, the basic behaviors and the advanced network capabilities are investigated in turn. A wheel-based robotic node architecture is proposed that can add controlled mobility to wireless sensor networks. A testbed including several prototype nodes has also been created to validate the basic functions of the proposed mobile sensor network system. Motion performance tests have been done to obtain the positioning errors and the power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can distribute themselves evenly in previously unknown environments. This provides powerful support for network deployment and maintenance and ensures that the sensor network will work properly in unfriendly environments.
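
    The abstract does not specify the deployment algorithm; a common way for mobile nodes to spread themselves evenly is potential-field dispersion, sketched here purely as an illustration (all names and parameter values are hypothetical):

```python
import math

def dispersion_step(positions, repulse_range=2.0, gain=0.1):
    """One synchronous step of potential-field dispersion: each node
    moves away from every neighbour closer than repulse_range, with a
    repulsive force that grows as nodes get closer (1/d falloff)."""
    moves = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy)
            if 0 < d < repulse_range:
                # unit vector away from neighbour, scaled by gain/d
                fx += gain * dx / (d * d)
                fy += gain * dy / (d * d)
        moves.append((xi + fx, yi + fy))
    return moves
```

    Iterating this step drives nearby nodes apart until all pairwise distances exceed the repulsion range, which yields an approximately even spread in an open area.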

  19. A Novel Attitude Determination System Aided by Polarization Sensor

    Directory of Open Access Journals (Sweden)

    Wei Zhi

    2018-01-01

    Full Text Available This paper aims to develop a novel attitude determination system aided by a polarization sensor. An improved heading angle function is derived using the perpendicular relationship between the direction of the E-vector of linearly polarized light and the solar vector in the atmospheric polarization distribution model. An Extended Kalman filter (EKF) with a quaternion differential equation as its dynamic model is applied to fuse the data from the sensors. The covariance functions of the filter process and measurement noises are deduced in detail. Indoor and outdoor tests were conducted to verify the validity and feasibility of the proposed attitude determination system. The test results showed that the polarization sensor is not affected by magnetic fields, so the proposed system can work properly in environments containing magnetic interference. The results also showed that the proposed system has higher measurement accuracy than a common attitude determination system and can provide precise parameters for Unmanned Aerial Vehicle (UAV) flight control. The main contribution of this paper is the implementation of the EKF for incorporating the self-developed polarization sensor into the conventional attitude determination system. A real-world experiment with a quad-rotor proved that the proposed system can work in a magnetic-interference environment and provide sufficient accuracy in attitude determination for autonomous navigation of a vehicle.

  20. A Novel Attitude Determination System Aided by Polarization Sensor.

    Science.gov (United States)

    Zhi, Wei; Chu, Jinkui; Li, Jinshan; Wang, Yinlong

    2018-01-09

    This paper aims to develop a novel attitude determination system aided by a polarization sensor. An improved heading angle function is derived using the perpendicular relationship between the direction of the E-vector of linearly polarized light and the solar vector in the atmospheric polarization distribution model. An Extended Kalman filter (EKF) with a quaternion differential equation as its dynamic model is applied to fuse the data from the sensors. The covariance functions of the filter process and measurement noises are deduced in detail. Indoor and outdoor tests were conducted to verify the validity and feasibility of the proposed attitude determination system. The test results showed that the polarization sensor is not affected by magnetic fields, so the proposed system can work properly in environments containing magnetic interference. The results also showed that the proposed system has higher measurement accuracy than a common attitude determination system and can provide precise parameters for Unmanned Aerial Vehicle (UAV) flight control. The main contribution of this paper is the implementation of the EKF for incorporating the self-developed polarization sensor into the conventional attitude determination system. A real-world experiment with a quad-rotor proved that the proposed system can work in a magnetic-interference environment and provide sufficient accuracy in attitude determination for autonomous navigation of a vehicle.
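
    The two ingredients named in records 19 and 20 can be sketched as follows: a heading angle from the perpendicularity of the E-vector and the solar vector, and the quaternion kinematics used as the EKF prediction model. The sign conventions in `heading_from_polarization` and the first-order integrator are illustrative assumptions, not the authors' exact formulation (a real system must also resolve the 180° ambiguity of the E-vector):

```python
import math

def heading_from_polarization(e_vector_angle_deg, solar_azimuth_deg):
    """Heading from the zenith E-vector: the E-vector is perpendicular
    to the solar vector, so the body heading follows from the measured
    E-vector angle and the known solar azimuth (assumed convention)."""
    return (solar_azimuth_deg + 90.0 - e_vector_angle_deg) % 360.0

def propagate_quaternion(q, omega, dt):
    """One Euler step of the quaternion differential equation
    q_dot = 0.5 * q (x) [0, omega], i.e. the EKF dynamic model;
    first-order integration is an illustrative choice."""
    w, x, y, z = q
    p, r, s = omega  # body angular rates (rad/s)
    dq = (0.5 * (-x * p - y * r - z * s),
          0.5 * ( w * p + y * s - z * r),
          0.5 * ( w * r - x * s + z * p),
          0.5 * ( w * s + x * r - y * p))
    q = tuple(a + b * dt for a, b in zip(q, dq))
    n = math.sqrt(sum(c * c for c in q))  # renormalize to unit length
    return tuple(c / n for c in q)
```

    In the fused system, the propagated quaternion gives the EKF's predicted attitude, and the polarization-derived heading serves as one of the measurement updates, which is why the heading remains usable under magnetic interference.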