WorldWideScience

Sample records for video event detection

  1. Encoding Concept Prototypes for Video Event Detection and Summarization

    NARCIS (Netherlands)

    Mazloom, M.; Habibian, A.; Liu, D.; Snoek, C.G.M.; Chang, S.F.

    2015-01-01

    This paper proposes a new semantic video representation for few and zero example event detection and unsupervised video event summarization. Different from existing works, which obtain a semantic representation by training concepts over images or entire video clips, we propose an algorithm that

  2. Learning Latent Super-Events to Detect Multiple Activities in Videos

    OpenAIRE

    Piergiovanni, AJ; Ryoo, Michael S.

    2017-01-01

    In this paper, we introduce the concept of learning latent \\emph{super-events} from activity videos, and present how it benefits activity detection in continuous videos. We define a super-event as a set of multiple events occurring together in videos with a particular temporal organization; it is the opposite concept of sub-events. Real-world videos contain multiple activities and are rarely segmented (e.g., surveillance videos), and learning latent super-events allows the model to capture ho...

  3. The ImageNet Shuffle: Reorganized Pre-training for Video Event Detection

    NARCIS (Netherlands)

    Mettes, P.; Koelma, D.C.; Snoek, C.G.M.

    2016-01-01

    This paper strives for video event detection using a representation learned from deep convolutional neural networks. Different from the leading approaches, who all learn from the 1,000 classes defined in the ImageNet Large Scale Visual Recognition Challenge, we investigate how to leverage the

  4. A Macro-Observation Scheme for Abnormal Event Detection in Daily-Life Video Sequences

    Directory of Open Access Journals (Sweden)

    Chiu Wei-Yao

    2010-01-01

    We propose a macro-observation scheme for abnormal event detection in daily life. The proposed macro-observation representation records the time-space energy of the motions of all moving objects in a scene without segmenting individual object parts. The energy history of each pixel in the scene is updated instantly with exponential weights, without explicitly specifying the duration of each activity. Since possible activities in daily life are numerous and distinct from each other, and not all abnormal events can be foreseen, images from a video sequence spanning sufficient repetitions of normal day-to-day activities are first randomly sampled. A constrained clustering model is proposed to partition the sampled images into groups. A newly observed event whose distance from every cluster centroid is large is then classified as an anomaly. The proposed method has been evaluated on the daily work of a laboratory and on the BEHAVE benchmark dataset. The experimental results reveal that it detects abnormal events such as burglary and fighting well, as long as they last for a sufficient duration. The proposed method can be used as a support system for scenes that would otherwise require full-time monitoring personnel.
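
    The pixel-level energy update and centroid-distance test described above can be illustrated with a minimal sketch; the weight alpha, the threshold choice, and all function names below are our own assumptions, not taken from the paper:

    ```python
    import numpy as np

    def update_energy(energy, prev_frame, frame, alpha=0.05):
        """Exponentially weighted time-space motion energy per pixel.
        alpha is an assumed weight; the abstract does not give a value."""
        motion = np.abs(frame.astype(float) - prev_frame.astype(float))
        return (1.0 - alpha) * energy + alpha * motion

    def anomaly_score(sample, centroids):
        """Distance of an observed energy map to the nearest cluster centroid."""
        return min(np.linalg.norm(sample - c) for c in centroids)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        normal = [rng.random((8, 8)) for _ in range(50)]   # sampled "normal" energy maps
        centroids = [np.mean(normal, axis=0)]              # stand-in for the clustering step
        threshold = np.percentile([anomaly_score(s, centroids) for s in normal], 99)
        odd = rng.random((8, 8)) * 5.0                     # unusually high energy
        print("anomalous:", anomaly_score(odd, centroids) > threshold)
    ```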

  5. Detection and Separation of Speech Event Using Audio and Video Information Fusion and Its Application to Robust Speech Interface

    Directory of Open Access Journals (Sweden)

    Futoshi Asano

    2004-09-01

    A method of detecting speech events in a multiple-sound-source condition using audio and video information is proposed. For detecting speech events, sound localization using a microphone array and human tracking by stereo vision are combined by a Bayesian network. From the inference results of the Bayesian network, information on the time and location of speech events can be obtained. The information on the detected speech events is then utilized in the robust speech interface. A maximum likelihood adaptive beamformer is employed as a preprocessor of the speech recognizer to separate the speech signal from environmental noise. The coefficients of the beamformer are kept updated based on the information on the speech events. The information on the speech events is also used by the speech recognizer for extracting the speech segment.

  6. Compositional Models for Video Event Detection: A Multiple Kernel Learning Latent Variable Approach (Open Access)

    Science.gov (United States)

    2014-03-03

    category in Fig. 1. This video contains segments focusing on the snowboard, the person jumping, is shot in an outdoor, ski-resort scene, and has fast... snowboard trick, but is unlikely to include all three. Grouping segments into their relevant scene types can improve recognition. Finally, the model must

  7. In search of video event semantics

    NARCIS (Netherlands)

    Mazloom, M.

    2016-01-01

    In this thesis we aim to represent an event in a video using semantic features. We start from a bank of concept detectors for representing events in video. At first we considered the relevance of concepts to the event inside the video representation. We address the problem of video event

  8. Anomaly detection driven active learning for identifying suspicious tracks and events in WAMI video

    Science.gov (United States)

    Miller, David J.; Natraj, Aditya; Hockenbury, Ryler; Dunn, Katherine; Sheffler, Michael; Sullivan, Kevin

    2012-06-01

    We describe a comprehensive system for learning to identify suspicious vehicle tracks from wide-area motion imagery (WAMI) video. First, since the road network for the scene of interest is assumed unknown, agglomerative hierarchical clustering is applied to all spatial vehicle measurements, resulting in spatial cells that largely capture individual road segments. Next, for each track, extreme value feature statistics are computed and aggregated at both the cell (speed, acceleration, azimuth) and track (range, total distance, duration) levels to form summary (p-value based) anomaly statistics for each track. Here, to fairly evaluate tracks that travel across different numbers of spatial cells, a single (most extreme) statistic is chosen for each cell-level feature type, over all cells traveled. Finally, a novel active learning paradigm, applied to a (logistic regression) track classifier, is invoked to learn to distinguish suspicious from merely anomalous tracks, starting from anomaly-ranked track prioritization, with ground-truth labeling by a human operator. This system has been applied to WAMI video data (ARGUS), with the tracks automatically extracted by a system developed in-house at Toyon Research Corporation. Our system gives promising preliminary results, ranking aerial vehicles, dismounts, and traffic violators highly as suspicious, and learning which features are most indicative of suspicious tracks.
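
    As a rough illustration of the track-scoring idea (per-feature empirical p-values aggregated into an anomaly score, with a classifier fitted on operator-labeled tracks), here is a toy sketch assuming scikit-learn is available; every name and constant is our own placeholder, and the actual ARGUS/Toyon pipeline is far richer:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def empirical_p_values(track, reference):
        """One-sided empirical p-value per feature: the fraction of reference
        tracks at least as extreme (smaller value = more anomalous)."""
        ref = np.asarray(reference)
        return np.array([(ref[:, j] >= track[j]).mean() for j in range(ref.shape[1])])

    def anomaly_score(p_values):
        """Fisher-style aggregation: sum of -log p over the features."""
        return float(-np.log(np.clip(p_values, 1e-12, 1.0)).sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        reference = rng.normal(size=(500, 3))   # e.g. max speed, max accel, duration
        tracks = rng.normal(size=(20, 3))
        tracks[0] += 4.0                        # inject one anomalous track
        scores = [anomaly_score(empirical_p_values(t, reference)) for t in tracks]
        order = np.argsort(scores)[::-1]        # anomaly-ranked prioritization
        # a human operator would label queried tracks; here we pretend the
        # top-ranked track is "suspicious" and the bottom-ranked one is benign
        X = tracks[[order[0], order[-1]]]
        clf = LogisticRegression().fit(X, [1, 0])
        print("injected track ranked:", int(np.where(order == 0)[0][0]) + 1)
    ```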

  9. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations, e.g. in video contrast or frame rate. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  10. Video fingerprinting for live events

    Science.gov (United States)

    Celik, Mehmet; Haitsma, Jaap; Barvinko, Pavlo; Langelaar, Gerhard; Maas, Martijn

    2009-02-01

    Multimedia fingerprinting (robust hashing) as a content identification technology is emerging as an effective tool for preventing unauthorized distribution of commercial content through user generated content (UGC) sites. Research in the field has mainly considered content types with slow distribution cycles, e.g. feature films, for which reference fingerprint ingestion and database indexing can be performed offline. As a result, research focus has been on improving the robustness and search speed. Live events, such as live sports broadcasts, impose new challenges on a fingerprinting system. For instance, highlights from a soccer match are often available, and viewed, on UGC sites well before the end of the match. In this scenario, the fingerprinting system should be able to ingest and index live content online and offer continuous search capability, where new material is identifiable within minutes of broadcast. In this paper, we concentrate on algorithmic and architectural challenges we faced when developing a video fingerprinting solution for live events. In particular, we discuss how to effectively utilize fast sorting algorithms and a master-slave architecture for fast and continuous ingestion of live broadcasts.

  11. Highlight detection for video content analysis through double filters

    Science.gov (United States)

    Sun, Zhonghua; Chen, Hexin; Chen, Mianshu

    2005-07-01

    Highlight detection is a video summarization technique that aims to include the most expressive or attractive parts of a video. Most research on highlight selection has been performed on sports video, detecting certain objects or events such as goals in soccer, touchdowns in football, and others. In this paper, we present a highlight detection method for film video. A highlight section in a film is unlike one in sports video, which usually involves specific objects or events. Determining a highlight in a film video involves three aspects: (a) locating an obvious audio event, (b) detecting expressive visual content around the obvious audio location, and (c) selecting the preferred portion of the extracted audio-visual highlight segments. We define a double-filter model to detect potential highlights in video. First, an obvious audio location is determined by filtering the salient audio features; then potential visual salience is detected around that audio location. Finally, the output of the audio-visual double filters is compared with a preference threshold to determine the final highlights. User study results indicate that the double-filter detection approach is an effective method for highlight detection in video content analysis.
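
    A compact sketch of the double-filter idea (an audio filter followed by a visual-salience check around each audio hit); the statistics used and every threshold below are our own placeholders, not the paper's:

    ```python
    import numpy as np

    def audio_filter(audio_energy, k=2.0):
        """First filter: frames whose audio energy is unusually high."""
        mu, sigma = audio_energy.mean(), audio_energy.std()
        return np.where(audio_energy > mu + k * sigma)[0]

    def visual_salience(frame_diffs, idx, window=5):
        """Second filter: visual activity (mean frame difference) around an audio hit."""
        lo, hi = max(0, idx - window), min(len(frame_diffs), idx + window + 1)
        return frame_diffs[lo:hi].mean()

    def detect_highlights(audio_energy, frame_diffs, preference=0.5):
        hits = audio_filter(audio_energy)
        return [i for i in hits if visual_salience(frame_diffs, i) > preference]

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        audio = rng.random(200); audio[120:125] += 3.0            # loud segment
        visual = rng.random(200) * 0.3; visual[118:130] += 0.8    # nearby visual activity
        print("highlight frames:", detect_highlights(audio, visual))  # frames around 120
    ```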

  12. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  13. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    Science.gov (United States)

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. While most existing research has focused on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way towards video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. In order to compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of the videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.

  14. Detection of solar events

    Science.gov (United States)

    Fischbach, Ephraim; Jenkins, Jere

    2013-08-27

    A flux detection apparatus can include a radioactive sample having a decay rate capable of changing in response to interaction with a first particle or a field, and a detector associated with the radioactive sample. The detector is responsive to a second particle or radiation formed by decay of the radioactive sample. The rate of decay of the radioactive sample can be correlated to flux of the first particle or the field. Detection of the first particle or the field can provide an early warning for an impending solar event.

  15. Recommendations for Recognizing Video Events by Concept Vocabularies

    NARCIS (Netherlands)

    Habibian, A.; Snoek, C.G.M.

    2014-01-01

    Representing videos using vocabularies composed of concept detectors appears promising for generic event recognition. While many have recently shown the benefits of concept vocabularies for recognition, studying the characteristics of a universal concept vocabulary suited for representing events is

  16. Indexing Motion Detection Data for Surveillance Video

    DEFF Research Database (Denmark)

    Vind, Søren Juhl; Bille, Philip; Gørtz, Inge Li

    2014-01-01

    We show how to compactly index video data to support fast motion detection queries. A query specifies a time interval T, an area A in the video, and two thresholds v and p. The answer to a query is a list of timestamps in T where ≥ p% of A has changed by ≥ v values. Our results show that by building a small index, we can support queries with a speedup of two to three orders of magnitude compared to motion detection without an index. For high resolution video, the index size is about 20% of the compressed video size.
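
    To make the query semantics concrete, here is an index-free (brute-force) evaluation of the query described above; the paper's contribution is an index that answers the same query two to three orders of magnitude faster, which this sketch does not attempt to reproduce:

    ```python
    import numpy as np

    def motion_query(frames, t_start, t_end, area, v, p):
        """Naive evaluation of the query: frames is a (T, H, W) grayscale array,
        area = (row0, row1, col0, col1) is the rectangle A, and v, p are the
        value-change and percentage-of-area thresholds. Returns timestamps t in
        [t_start, t_end) where >= p% of A changed by >= v versus the previous frame."""
        r0, r1, c0, c1 = area
        hits = []
        for t in range(max(t_start, 1), t_end):
            diff = np.abs(frames[t, r0:r1, c0:c1].astype(int)
                          - frames[t - 1, r0:r1, c0:c1].astype(int))
            if (diff >= v).mean() * 100.0 >= p:
                hits.append(t)
        return hits

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        video = rng.integers(0, 10, size=(50, 32, 32))
        video[25:, 8:24, 8:24] += 100                  # a change event starting at t = 25
        print(motion_query(video, 0, 50, (8, 24, 8, 24), v=50, p=80))   # [25]
    ```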

  17. A video event trigger for high frame rate, high resolution video technology

    Science.gov (United States)

    Williams, Glenn L.

    1991-01-01

    When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
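
    The pre-trigger/post-trigger storage idea can be sketched with a ring buffer; the trigger itself (the paper's parallel digital state machine with fuzzy-logic devices) is abstracted into a callable, and all names and buffer sizes below are our own assumptions:

    ```python
    from collections import deque

    class EventTriggeredRecorder:
        """Keep the last `pre` frames in a ring buffer; once the trigger fires,
        archive them plus the next `post` frames."""

        def __init__(self, trigger, pre=30, post=60):
            self.trigger = trigger
            self.ring = deque(maxlen=pre)
            self.post = post
            self.countdown = 0
            self.archive = []

        def push(self, frame):
            if self.countdown > 0:                 # still recording after a trigger
                self.archive.append(frame)
                self.countdown -= 1
            elif self.trigger(frame):              # event onset detected
                self.archive.extend(self.ring)     # pre-trigger frames
                self.archive.append(frame)
                self.countdown = self.post
            self.ring.append(frame)

    if __name__ == "__main__":
        rec = EventTriggeredRecorder(trigger=lambda x: x > 100, pre=3, post=2)
        for sample in [1, 2, 3, 4, 150, 5, 6, 7]:   # integers stand in for frames
            rec.push(sample)
        print(rec.archive)                          # [2, 3, 4, 150, 5, 6]
    ```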

  18. Improving video event retrieval by user feedback

    NARCIS (Netherlands)

    Boer, M. de; Pingen, G.; Knook, D.; Schutte, K.; Kraaij, W.

    2017-01-01

    In content based video retrieval videos are often indexed with semantic labels (concepts) using pre-trained classifiers. These pre-trained classifiers (concept detectors), are not perfect, and thus the labels are noisy. Additionally, the amount of pre-trained classifiers is limited. Often automatic

  19. Video behavior profiling for anomaly detection.

    Science.gov (United States)

    Xiang, Tao; Gong, Shaogang

    2008-05-01

    This paper aims to address the problem of modelling video behaviour captured in surveillance videos for the applications of online normal behaviour recognition and anomaly detection. A novel framework is developed for automatic behaviour profiling and online anomaly sampling/detection without any manual labelling of the training dataset. The framework consists of the following key components: (1) A compact and effective behaviour representation method is developed based on discrete scene event detection. The similarity between behaviour patterns is measured based on modelling each pattern using a Dynamic Bayesian Network (DBN). (2) Natural grouping of behaviour patterns is discovered through a novel spectral clustering algorithm with unsupervised model selection and feature selection on the eigenvectors of a normalised affinity matrix. (3) A composite generative behaviour model is constructed which is capable of generalising from a small training set to accommodate variations in unseen normal behaviour patterns. (4) A run-time accumulative anomaly measure is introduced to detect abnormal behaviour, while normal behaviour patterns are recognised when sufficient visual evidence has become available, based on an online Likelihood Ratio Test (LRT) method. This ensures robust and reliable anomaly detection and normal behaviour recognition at the shortest possible time. The effectiveness and robustness of our approach are demonstrated through experiments using noisy and sparse datasets collected from both indoor and outdoor surveillance scenarios. In particular, it is shown that a behaviour model trained using an unlabelled dataset is superior to those trained using the same but labelled dataset in detecting anomaly from an unseen video. The experiments also suggest that our online LRT-based behaviour recognition approach is advantageous over the commonly used Maximum Likelihood (ML) method in differentiating ambiguities among different behaviour classes observed online.

  20. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video.

    Science.gov (United States)

    Lee, Gil-Beom; Lee, Myeong-Jin; Lee, Woo-Kyung; Park, Joo-Heon; Kim, Tae-Hwan

    2017-03-22

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.

  1. Video enhancement effectiveness for target detection

    Science.gov (United States)

    Simon, Michael; Fischer, Amber; Petrov, Plamen

    2011-05-01

    Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance and reconnaissance (ISR) and close-air support troops in contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.

  2. Video analysis of motor events in REM sleep behavior disorder.

    Science.gov (United States)

    Frauscher, Birgit; Gschliesser, Viola; Brandauer, Elisabeth; Ulmer, Hanno; Peralta, Cecilia M; Müller, Jörg; Poewe, Werner; Högl, Birgit

    2007-07-30

    In REM sleep behavior disorder (RBD), several studies focused on electromyographic characterization of motor activity, whereas video analysis has remained more general. The aim of this study was to undertake a detailed and systematic video analysis. Nine polysomnographic records from 5 Parkinson patients with RBD were analyzed and compared with sex- and age-matched controls. Each motor event in the video during REM sleep was classified according to duration, type of movement, and topographical distribution. In RBD, a mean of 54 +/- 23.2 events/10 minutes of REM sleep (total 1392) were identified and visually analyzed. Seventy-five percent of all motor events lasted ...

  3. MediaMill at TRECVID 2014: Searching Concepts, Objects, Instances and Events in Video

    NARCIS (Netherlands)

    Snoek, C.G.M.; van de Sande, K.E.A.; Fontijne, D.; Cappallo, S.; van Gemert, J.; Habibian, A.; Mensink, T.; Mettes, P.; Tao, R.; Koelma, D.C.; Smeulders, A.W.M.

    2014-01-01

    In this paper we summarize our TRECVID 2014 video retrieval experiments. The MediaMill team participated in five tasks: concept detection, object localization, instance search, event recognition and recounting. We experimented with concept detection using deep learning and color difference coding,

  4. Paroxysmal events during prolonged video electroencephalography monitoring in refractory epilepsy.

    Science.gov (United States)

    Sanabria-Castro, A; Henríquez-Varela, F; Monge-Bonilla, C; Lara-Maier, S; Sittenfeld-Appel, M

    2017-03-16

    Given that epileptic seizures and non-epileptic paroxysmal events have similar clinical manifestations, using specific diagnostic methods is crucial, especially in patients with drug-resistant epilepsy. Prolonged video electroencephalography monitoring during epileptic seizures reveals epileptiform discharges and has become an essential procedure for epilepsy diagnosis. The main purpose of this study is to characterise paroxysmal events and compare patterns in patients with refractory epilepsy. We conducted a retrospective analysis of medical records from 91 patients diagnosed with refractory epilepsy who underwent prolonged video electroencephalography monitoring during hospitalisation. During prolonged video electroencephalography monitoring, 76.9% of the patients (n=70) had paroxysmal events. The mean number of events was 3.4±2.7; the duration of these events was highly variable. Most patients (80%) experienced seizures during wakefulness. The most common events were focal seizures with altered levels of consciousness, progressive bilateral generalized seizures, and psychogenic non-epileptic seizures. Regarding all paroxysmal events, no differences were observed in the number or type of events by sex, in duration by sex or age at onset, or in the number of events by type of event. Psychogenic non-epileptic seizures were predominantly registered during wakefulness, lasted longer, started at older ages, and were more frequent in women. Paroxysmal events recorded during prolonged video electroencephalography monitoring in patients with refractory epilepsy show patterns and characteristics similar to those reported elsewhere. Copyright © 2017 The Author(s). Published by Elsevier España, S.L.U. All rights reserved.

  5. Moving Shadow Detection in Video Using Cepstrum

    Directory of Open Access Journals (Sweden)

    Fuat Cogun

    2013-01-01

    Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum-based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using well-known benchmark test sets. To show the improvements over previous approaches, quantitative metrics are introduced and comparisons based on these metrics are made.

  6. Video change detection for fixed wing UAVs

    Science.gov (United States)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for an image-based change detection which is designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands a mission planning with a clear purpose including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off the shelf (COTS) system which comprises a differential GPS and autopilot system estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a front-end of a database to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain on the real video data acquired by the advanced COTS fixed wing UAV and synthetic data. For the
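
    Assuming OpenCV is available, a bare-bones version of the registration-plus-differencing step might look as follows; the feature choice, RANSAC threshold and difference threshold are our own guesses, and the paper's handling of parallax, sensor noise and mission planning is not reproduced:

    ```python
    import cv2
    import numpy as np

    def change_map(before, after, diff_threshold=40):
        """Register the "after" frame onto the "before" frame with a homography
        estimated from ORB feature matches, then threshold the absolute difference."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(before, None)
        kp2, des2 = orb.detectAndCompute(after, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]
        src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = before.shape[:2]
        warped = cv2.warpPerspective(after, H, (w, h))
        diff = cv2.absdiff(before, warped)
        return (diff > diff_threshold).astype(np.uint8) * 255
    ```

    Keeping the perspective change between the "before" and "after" flights small, as the authors stress, is what makes a single homography an acceptable registration model in such a sketch.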

  7. Guest Editorial: Analysis and Retrieval of Events/Actions and Workflows in Video Streams

    DEFF Research Database (Denmark)

    Doulamis, Anastasios; Doulamis, Nikolaos; Bertini, Marco

    2016-01-01

    Cognitive video supervision and event analysis in video sequences is a critical task in many multimedia applications. Methods, tools, and algorithms that aim to detect and recognize high-level concepts and their respective spatiotemporal and causal relations in order to identify semantic video activities, actions, and procedures have been in the focus of the research community over the last years. This research area has strong impact on many real-life applications such as service quality assurance, compliance to the designed procedures in industrial plants, surveillance of people-dense areas (e.g., thematic parks, critical public infrastructures), crisis management in public service areas (e.g., train stations, airports), security (detection of abnormal behaviors in surveillance videos), semantic characterization, and annotation of video streams in various domains (e.g., broadcast or user...

  8. Video Anomaly Detection with Compact Feature Sets for Online Performance.

    Science.gov (United States)

    Leyva, Roberto; Sanchez, Victor; Li, Chang-Tsun

    2017-04-18

    Over the past decade, video anomaly detection has been explored with remarkable results. However, research on methodologies suitable for online performance is still very limited. In this paper, we present an online framework for video anomaly detection. The key aspect of our framework is a compact set of highly descriptive features, which is extracted from a novel cell structure that helps to define support regions in a coarse-to-fine fashion. Based on the scene's activity, only a limited number of support regions are processed, thus limiting the size of the feature set. Specifically, we use foreground occupancy and optical flow features. The framework uses an inference mechanism that evaluates the compact feature set via Gaussian Mixture Models, Markov Chains and Bag-of-Words in order to detect abnormal events. Our framework also considers the joint response of the models in the local spatio-temporal neighborhood to increase detection accuracy. We test our framework on popular existing datasets and on a new dataset comprising a wide variety of realistic videos captured by surveillance cameras. This particular dataset includes surveillance videos depicting criminal activities, car accidents and other dangerous situations. Evaluation results show that our framework outperforms other online methods and attains a very competitive detection performance compared to state-of-the-art non-online methods.

  9. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

    In this paper, we consider a method for defect detection in a video sequence, which consists of three main steps: frame compensation; preprocessing by a detector based on the ranking of pixel values; and classification of all pixels having anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged under natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.

  10. Motion Entropy Feature and Its Applications to Event-Based Segmentation of Sports Video

    Science.gov (United States)

    Chen, Chen-Yu; Wang, Jia-Ching; Wang, Jhing-Fa; Hu, Yu-Hen

    2008-12-01

    An entropy-based criterion is proposed to characterize the pattern and intensity of object motion in a video sequence as a function of time. By applying a homoscedastic error model-based time series change point detection algorithm to this motion entropy curve, one is able to segment the corresponding video sequence into individual sections, each consisting of a semantically relevant event. The proposed method is tested on six hours of sports videos including basketball, soccer, and tennis. Excellent experimental results are observed.
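
    A minimal sketch of the two stages named in the abstract, with a sliding z-test standing in for the homoscedastic error model-based change-point detector actually used; bin count, window size and threshold are our own assumptions:

    ```python
    import numpy as np

    def motion_entropy(flow_angles, bins=16):
        """Shannon entropy of the motion-direction histogram for one frame."""
        hist, _ = np.histogram(flow_angles, bins=bins, range=(-np.pi, np.pi))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def change_points(entropy_curve, window=20, z_thresh=3.0):
        """Simple mean-shift detector on the entropy curve via a z-test
        between adjacent windows (a stand-in for the paper's model)."""
        cps = []
        x = np.asarray(entropy_curve, dtype=float)
        for t in range(window, len(x) - window):
            a, b = x[t - window:t], x[t:t + window]
            pooled = np.sqrt((a.var() + b.var()) / 2) + 1e-9
            if abs(a.mean() - b.mean()) / pooled > z_thresh:
                cps.append(t)
        return cps

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        # synthetic entropy values: a shift in motion pattern at index 100
        curve = np.r_[rng.normal(2.0, 0.1, 100), rng.normal(3.5, 0.1, 100)]
        print(change_points(curve)[:3])   # change detected near index 100
    ```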

  11. High-Level Event Recognition in Unconstrained Videos

    Science.gov (United States)

    2013-01-01

    et al. [93] adopted random forest, a collection of binary decision trees, for fast quantization. Shotton et al. [124] proposed semantic texton forests ...detection for sports video. In: Proceedings of international conference on image and video retrieval, Urbana-Champaign, IL 8. Ballan L, Bertini M, Bimbo AD...Proceedings of AAAI conference 93. Moosmann F, Nowak E, Jurie F (2008) Randomized clustering forests for image classification. IEEE Trans Pattern Anal

  12. An integrated framework for detecting suspicious behaviors in video surveillance

    Science.gov (United States)

    Zin, Thi Thi; Tin, Pyke; Hama, Hiromitsu; Toriu, Takashi

    2014-03-01

    In this paper, we propose an integrated framework for detecting suspicious behaviors in video surveillance systems established in public places such as railway stations, airports, and shopping malls. In particular, people loitering suspiciously, unattended objects left behind, and the exchange of suspicious objects between persons are common security concerns in airports and other transit scenarios. These involve understanding the scene and event, analyzing human movements, recognizing controllable objects, and observing the effect of the human movement on those objects. In the proposed framework, a multiple background modeling technique, a high-level motion feature extraction method, and embedded Markov chain models are integrated for detecting suspicious behaviors in real-time video surveillance systems. Specifically, the proposed framework employs a probability-based multiple background modeling technique to detect moving objects. Then the velocity and distance measures are computed as the high-level motion features of interest. By integrating the computed features with the first passage time probabilities of the embedded Markov chain, the suspicious behaviors in video surveillance are analyzed for detecting loitering persons, objects left behind, and human interactions such as fighting. The proposed framework has been tested using standard public datasets and our own video surveillance scenarios.

  13. Recommendations for recognizing video events by concept vocabularies

    Science.gov (United States)

    2014-06-01

    detectors using the human-annotated training data from two publicly available resources: the TRECVID 2012 Semantic Indexing task [47,3] and the ...complex events in video. Acknowledgments This research is supported by the STW STORY project, the Dutch national program COMMIT, and by the

  14. Evaluation of experimental UAV video change detection

    Science.gov (United States)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, i.e., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost overlooked when change detection is executed manually. With respect to the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Krüger,1 and Saur et al.2 and have built upon the ideas of Saur and Bartelsen.3 The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect

  15. Target detection and tracking in infrared video

    Science.gov (United States)

    Deng, Zhihui; Zhu, Jihong

    2017-07-01

    In this paper, we propose a method for target detection and tracking in infrared video. The target is defined by its location and extent in a single frame. In the initialization process, we use an adaptive threshold to segment the target, then extract the fern feature and normalize it as a template. The detector uses a random forest and ferns to detect the target in the infrared video; the random forest and ferns are random combinations of 2-bit binary patterns, which are robust to infrared targets with blurred and unknown contours. The tracker uses the gray-value weighted mean-shift algorithm to track the infrared target, which is always brighter than the background, and can track a deformed target efficiently and quickly. When the target disappears, the detector re-detects the target in the incoming infrared images. Finally, we verify the algorithm on a real-time infrared target detection and tracking platform. The results show that our algorithm performs better than TLD in terms of recall and runtime on infrared video.
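
    As an illustration of fern-style comparison features (our simplified stand-in for the 2-bit binary patterns mentioned above; the layout, patch size and pair count are invented for the example):

    ```python
    import numpy as np

    def make_fern(n_pairs=8, patch=16, seed=0):
        """Random pixel-pair layout for one fern (assumed layout, not the paper's)."""
        rng = np.random.default_rng(seed)
        return rng.integers(0, patch, size=(n_pairs, 4))   # (y1, x1, y2, x2) per pair

    def fern_code(patch_img, fern):
        """Binary code from brightness comparisons; invariant to monotone
        illumination changes, which suits blurred infrared targets."""
        code = 0
        for y1, x1, y2, x2 in fern:
            code = (code << 1) | int(patch_img[y1, x1] > patch_img[y2, x2])
        return code

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        target = rng.integers(0, 255, size=(16, 16))
        print(f"fern code: {fern_code(target, make_fern()):08b}")
    ```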

  16. Event detection with zero example : Select the right and suppress the wrong concepts

    NARCIS (Netherlands)

    Lu, Y.J.; Zhang, H.; Boer, M.H.T. de; Ngo, C.W.

    2016-01-01

    Complex video event detection without visual examples is a very challenging issue in multimedia retrieval. We present a state-of-the-art framework for event search without any need of exemplar videos and textual metadata in search corpus. To perform event search given only query words, the core of

  17. GRADUAL TRANSITION DETECTION FOR VIDEO PARTITIONING USING MORPHOLOGICAL OPERATORS

    Directory of Open Access Journals (Sweden)

    Valery Naranjo

    2011-05-01

    Temporal segmentation of video data for partitioning the sequence into shots is a prerequisite in many applications: automatic video indexing and editing, old film restoration, perceptual coding, etc. The detection of abrupt transitions, or cuts, has been thoroughly studied in previous works. In this paper we present a scheme to identify the most common gradual transitions, i.e., dissolves and wipes, which relies on mathematical morphology operators. The approach is restricted to fast techniques which require low computation (without motion estimation), are adapted to compressed sequences, and are able to cope with random brightness variations (often occurring in old films). The present study illustrates how the morphological operators can be used to analyze temporal series for detecting particular events, either working directly on the 1D signal or building an intermediate 2D image from the 1D signals to take advantage of the spatial operators.

  18. Violent Interaction Detection in Video Based on Deep Learning

    Science.gov (United States)

    Zhou, Peipei; Ding, Qinghai; Luo, Haibo; Hou, Xinglin

    2017-06-01

    Violent interaction detection is of vital importance in some video surveillance scenarios, such as railway stations, prisons or psychiatric centres. Existing vision-based methods are mainly based on hand-crafted features, such as statistical features between motion regions, leading to poor adaptability to other datasets. Inspired by the success of convolutional networks on common activity recognition, we construct a FightNet to represent complicated visual violent interactions. In this paper, a new input modality, the image acceleration field, is proposed to better extract motion attributes. Firstly, each video is decomposed into RGB frames. Secondly, the optical flow field is computed using consecutive frames, and the acceleration field is obtained from the optical flow field. Thirdly, FightNet is trained with three kinds of input modalities, i.e., RGB images for the spatial network, and optical flow images and acceleration images for the temporal networks. By fusing results from the different inputs, we conclude whether a video contains a violent event or not. To provide researchers a common ground for comparison, we have collected a violent interaction dataset (VID) containing 2314 videos, with 1077 fight videos and 1237 non-fight videos. Compared with other algorithms, experimental results demonstrate that the proposed model for violent interaction detection shows higher accuracy and better robustness.
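
    Assuming OpenCV is available, the acceleration-field modality described above can be approximated as the temporal difference of two consecutive dense optical-flow fields; the Farneback parameters below are arbitrary placeholders rather than the paper's settings:

    ```python
    import cv2
    import numpy as np

    def acceleration_field(frame0, frame1, frame2):
        """Per-pixel acceleration vectors from three consecutive BGR frames,
        computed as the difference of two dense optical-flow fields."""
        g0, g1, g2 = (cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in (frame0, frame1, frame2))
        params = dict(pyr_scale=0.5, levels=3, winsize=15,
                      iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flow01 = cv2.calcOpticalFlowFarneback(g0, g1, None, **params)
        flow12 = cv2.calcOpticalFlowFarneback(g1, g2, None, **params)
        return flow12 - flow01          # (H, W, 2) acceleration field
    ```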

  19. Robust Shot Boundary Detection from Video Using Dynamic Texture

    Directory of Open Access Journals (Sweden)

    Peng Taile

    2014-03-01

    Video shot boundary detection is a fundamental topic in computer vision and is important for video analysis and video understanding. Existing video boundary detection methods are typically effective only for certain types of video data and have relatively low generalization ability. We present a novel shot boundary detection algorithm based on video dynamic texture. Firstly, two adjacent frames are read from a given video and normalized to the same size. Secondly, we divide these frames into sub-domains on the same grid and calculate the average gradient direction of each sub-domain to form the dynamic texture. Finally, the dynamic textures of adjacent frames are compared. We have performed experiments on different types of video data. The experimental results show that our method has high generalization ability: across different types of videos, our algorithm achieves higher average precision and average recall than several existing algorithms.
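
    A simplified reading of the dynamic-texture descriptor (average gradient direction per sub-domain, compared between adjacent frames); the grid size and decision threshold are our own assumptions:

    ```python
    import numpy as np

    def dynamic_texture(frame, grid=(8, 8)):
        """Average gradient direction per sub-domain of a grayscale frame."""
        gy, gx = np.gradient(frame.astype(float))
        angles = np.arctan2(gy, gx)
        h, w = frame.shape
        gh, gw = h // grid[0], w // grid[1]
        tex = np.zeros(grid)
        for i in range(grid[0]):
            for j in range(grid[1]):
                tex[i, j] = angles[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].mean()
        return tex

    def is_shot_boundary(frame_a, frame_b, threshold=0.5):
        """Declare a cut when the mean absolute change in dynamic texture
        between adjacent frames exceeds an assumed threshold."""
        return np.abs(dynamic_texture(frame_a) - dynamic_texture(frame_b)).mean() > threshold

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # horizontal ramp "frame"
        noisy = ramp + rng.normal(0.0, 0.001, ramp.shape)    # same shot, slight noise
        other = ramp.T                                        # very different content
        print(is_shot_boundary(ramp, noisy), is_shot_boundary(ramp, other))  # False True
    ```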

  20. "Life in the Universe" Final Event Video Now Available

    Science.gov (United States)

    2002-02-01

    ESO Video Clip 01/02 is issued on the web in conjunction with the release of a 20-min documentary video from the Final Event of the "Life in the Universe" programme. This unique event took place in November 2001 at CERN in Geneva, as part of the 2001 European Science and Technology Week, an initiative by the European Commission to raise the public awareness of science in Europe. The "Life in the Universe" programme comprised competitions in 23 European countries to identify the best projects from school students. The projects could be scientific or a piece of art, a theatrical performance, poetry or even a musical performance. The only restriction was that the final work must be based on scientific evidence. Winning teams from each country were invited to a "Final Event" at CERN on 8-11 November, 2001 to present their projects to a panel of International Experts during a special three-day event devoted to understanding the possibility of other life forms existing in our Universe. This Final Event also included a spectacular 90-min webcast from CERN with the highlights of the programme. The video describes the Final Event and the enthusiastic atmosphere when more than 200 young students and teachers from all over Europe met with some of the world's leading scientific experts of the field. The present video clip, with excerpts from the film, is available in four versions: two MPEG files and two streamer-versions of different sizes; the latter require RealPlayer software. Video Clip 01/02 may be freely reproduced. The 20-min video is available on request from ESO, for viewing in VHS and, for broadcasters, in Betacam-SP format. Please contact the ESO EPR Department for more details. Life in the Universe was jointly organised by the European Organisation for Nuclear Research (CERN) , the European Space Agency (ESA) and the European Southern Observatory (ESO) , in co-operation with the European Association for Astronomy Education (EAAE). Other research organisations were

  1. Automatic blood detection in capsule endoscopy video

    Science.gov (United States)

    Novozámský, Adam; Flusser, Jan; Tachecí, Ilja; Sulík, Lukáš; Bureš, Jan; Krejcar, Ondřej

    2016-12-01

    We propose two automatic methods for detecting bleeding in wireless capsule endoscopy videos of the small intestine. The first one uses solely the color information, whereas the second one incorporates assumptions about the blood spot shape and size. The key original idea is the definition of a new color space that provides good separability of blood pixels and the intestinal wall. Both methods can be applied individually, or their results can be fused together for the final decision. We evaluate their individual performance and various fusion rules on real data, manually annotated by an endoscopist.

  2. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion. Video features can then be extracted from key frames. However, most such research performs off-line video processing, in which the whole video scope is known a priori, allowing multiple scans of the stored video files during video processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicasted as annotations or metadata over a separate channel to assist in content filtering such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.

  3. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video

    Directory of Open Access Journals (Sweden)

    Gil-beom Lee

    2017-03-01

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.

  4. Video Salient Object Detection via Fully Convolutional Networks.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  5. Bi-Level Semantic Representation Analysis for Multimedia Event Detection.

    Science.gov (United States)

    Chang, Xiaojun; Ma, Zhigang; Yang, Yi; Zeng, Zhiqiang; Hauptmann, Alexander G

    2017-05-01

    Multimedia event detection has been one of the major endeavors in video event analysis. A variety of approaches have been proposed recently to tackle this problem. Among others, using semantic representation has been accredited for its promising performance and desirable ability for human-understandable reasoning. To generate a semantic representation, we usually utilize several external image/video archives and apply the concept detectors trained on them to the event videos. Due to the intrinsic differences between these archives, the resulting representation presumably has different predicting capabilities for a certain event. Notwithstanding, not much work is available for assessing the efficacy of semantic representation at the source level. On the other hand, it is plausible that some concepts are noisy for detecting a specific event. Motivated by these two shortcomings, we propose a bi-level semantic representation analysis method. At the source level, our method learns weights for the semantic representations attained from different multimedia archives. Meanwhile, it restrains the negative influence of noisy or irrelevant concepts at the overall concept level. In addition, we particularly focus on efficient multimedia event detection with few positive examples, which is highly valuable in real-world scenarios. We perform extensive experiments on the challenging TRECVID MED 2013 and 2014 datasets, with encouraging results that validate the efficacy of our proposed approach.

  6. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

    Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of the methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently-used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),

  7. Learning Multimodal Deep Representations for Crowd Anomaly Event Detection

    Directory of Open Access Journals (Sweden)

    Shaonian Huang

    2018-01-01

    Anomaly event detection in crowd scenes is extremely important; however, the majority of existing studies merely use hand-crafted features to detect anomalies. In this study, a novel unsupervised deep learning framework is proposed to detect anomaly events in crowded scenes. Specifically, low-level visual features, energy features, and motion map features are simultaneously extracted based on spatiotemporal energy measurements. Three convolutional restricted Boltzmann machines are trained to model the mid-level feature representation of normal patterns. Then a multimodal fusion scheme is utilized to learn the deep representation of crowd patterns. Based on the learned deep representation, a one-class support vector machine model is used to detect anomaly events. The proposed method is evaluated using two available public datasets and compared with state-of-the-art methods. The experimental results show its competitive performance for anomaly event detection in video surveillance.

  8. Automatic polyp detection in colonoscopy videos

    Science.gov (United States)

    Yuan, Zijie; IzadyYazdanabadi, Mohammadhassan; Mokkapati, Divya; Panvalkar, Rujuta; Shin, Jae Y.; Tajbakhsh, Nima; Gurudu, Suryakanth; Liang, Jianming

    2017-02-01

    Colon cancer is the second-leading cancer killer in the US [1]. Colonoscopy is the primary method for screening and prevention of colon cancer, but during colonoscopy a significant number (25% [2]) of polyps (precancerous abnormal growths inside the colon) are missed; therefore, the goal of our research is to reduce the polyp miss-rate of colonoscopy. This paper presents a method to detect polyps automatically in colonoscopy video. Our system has two stages: candidate generation and candidate classification. In candidate generation (stage 1), we chose 3,463 frames (including 1,718 with-polyp frames) from a real-time colonoscopy video database. We first applied processing procedures, namely intensity adjustment, edge detection and morphology operations, as pre-processing. We extracted each connected component (edge contour) as one candidate patch from the pre-processed image. With the help of ground truth (GT) images, 2 constraints were applied to each candidate patch, dividing and saving them into a polyp group and a non-polyp group. In candidate classification (stage 2), we trained and tested convolutional neural networks (CNNs) with the AlexNet architecture [3] to classify each candidate into the with-polyp or non-polyp class. Each with-polyp patch was processed by rotation, translation and scaling to achieve invariance and obtain a more robust CNN system. We applied leave-2-patients-out cross-validation on this model (4 of 6 cases were chosen as the training set and the remaining 2 as the testing set). The system accuracy and sensitivity are 91.47% and 91.76%, respectively.

  9. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    Science.gov (United States)

    Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.

    2003-12-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.

  10. Object detection in surveillance video from dense trajectories

    OpenAIRE

    Zhai, Mengyao

    2015-01-01

    Detecting objects such as humans or vehicles is a central problem in surveillance video. Myriad standard approaches exist for this problem. At their core, approaches consider either the appearance of people, patterns of their motion, or differences from the background. In this paper we build on dense trajectories, a state-of-the-art approach for describing spatio-temporal patterns in video sequences. We demonstrate an application of dense trajectories to object detection in surveillance video...

  11. State-based Event Detection Optimization for Complex Event Processing

    Directory of Open Access Journals (Sweden)

    Shanglian PENG

    2014-02-01

    Full Text Available Detection of patterns in high-speed, large-volume event streams has been an important paradigm in many application areas of Complex Event Processing (CEP), including security monitoring, financial markets analysis and health-care monitoring. To assure real-time responsive complex pattern detection over high-volume, high-speed event streams, efficient event detection techniques have to be designed. Unfortunately, evaluation of the Nondeterministic Finite Automaton (NFA) based event detection model has mainly considered single event queries and their optimization. In this paper, we propose multiple event query evaluation on event streams. In particular, we consider a scalable multiple event detection model that shares NFA transfer states across different event queries. For each event query, the query is parsed into an NFA and the states of the NFA are partitioned into different units. With this partition, the same individual state of an NFA is run on different processing nodes, providing state sharing and reducing partial-match maintenance. We compare our state-based approach with Stream-based And Shared Event processing (SASE). Our experiments demonstrate that the state-based approach outperforms SASE in both CPU time usage and memory consumption.
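
    As a toy illustration of NFA-style pattern detection over an event stream (not the paper's shared-state, distributed evaluation), the sketch below matches a fixed sequence of event types and reports when the accepting state is reached; the event names and pattern are hypothetical.

      from dataclasses import dataclass

      @dataclass
      class Event:
          kind: str
          payload: dict

      # Hypothetical pattern: LOGIN followed by FAILED_AUTH followed by TRANSFER, in order.
      PATTERN = ["LOGIN", "FAILED_AUTH", "TRANSFER"]

      def detect(stream):
          """Run a single linear NFA over the event stream; yield a match whenever
          a run reaches the accepting state."""
          runs = []  # each run stores the index of the next pattern symbol it expects
          for event in stream:
              new_runs = []
              for state in runs + [0]:           # also try starting a new run here
                  if event.kind == PATTERN[state]:
                      if state + 1 == len(PATTERN):
                          yield event            # accepting state reached
                      else:
                          new_runs.append(state + 1)
                  elif state > 0:
                      new_runs.append(state)     # ignore non-matching events for in-progress runs
              runs = new_runs

      if __name__ == "__main__":
          events = [Event("LOGIN", {}), Event("PING", {}), Event("FAILED_AUTH", {}), Event("TRANSFER", {})]
          print(sum(1 for _ in detect(events)), "match(es)")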

  12. Automatic inpainting scheme for video text detection and removal.

    Science.gov (United States)

    Mosleh, Ali; Bouguila, Nizar; Ben Hamza, Abdessamad

    2013-11-01

    We present a two-stage framework for automatic video text removal that detects and removes embedded video text and fills in the remaining regions with appropriate data. In the video text detection stage, text locations in each frame are found via an unsupervised clustering performed on the connected components produced by the stroke width transform (SWT). Since SWT needs an accurate edge map, we develop a novel edge detector which benefits from the geometric features revealed by the bandlet transform. Next, the motion patterns of the text objects of each frame are analyzed to localize video texts. The detected video text regions are removed, then the video is restored by an inpainting scheme. The proposed video inpainting approach applies spatio-temporal geometric flows extracted by bandlets to reconstruct the missing data. A 3D volume regularization algorithm, which takes advantage of bandlet bases in exploiting the anisotropic regularities, is introduced to carry out the inpainting task. The method does not need extra processes to satisfy visual consistency. The experimental results demonstrate the effectiveness of both our proposed video text detection approach and the video completion technique, and consequently the entire automatic video text removal and restoration process.

  13. Video Salient Object Detection via Fully Convolutional Networks

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    2018-01-01

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: (1) deep video saliency model training in the absence of sufficiently large and pixel-wise annotated video data, and (2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image datasets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the DAVIS dataset (MAE of .06) and the FBMS dataset (MAE of .07), and do so with much improved speed (2 fps with all steps).

  14. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    numbers of video streams on a single server. The focus of the work is on using the information in coded video streams to reduce the computational complexity and memory requirements, which translates into reduced hardware requirements and costs. The devised algorithm detects and segments activity based...

  15. Video Shot Boundary Detection based on Multifractal Analysis

    Directory of Open Access Journals (Sweden)

    B. D. Reljin

    2011-11-01

    Full Text Available Extracting video shots is an essential preprocessing step in almost all video analysis, indexing, and other content-based operations. This process is equivalent to detecting the shot boundaries in a video. In this paper we present video Shot Boundary Detection (SBD) based on Multifractal Analysis (MA). Low-level features (color and texture features) are extracted from each frame in the video sequence. Features are concatenated into feature vectors (FVs) and stored in a feature matrix. Matrix rows correspond to FVs of frames from the video sequence, while columns are time series of a particular FV component. Multifractal analysis is applied to the FV component time series, and shot boundaries are detected as high singularities of the time series above a predefined threshold. The proposed SBD method is tested on a real video sequence with 64 shots, with manually labeled shot boundaries. Detection accuracy depends on the number of FV components used. For only one FV component detection accuracy lies in the range 76-92% (depending on the selected threshold), while by combining two FV components all shots are detected completely (accuracy of 100%).
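
    The sketch below is not the multifractal method itself, but a minimal baseline with the same pipeline shape (per-frame low-level features, a time series of frame-to-frame differences, and a threshold), assuming OpenCV; the histogram size and threshold are illustrative.

      import cv2

      def shot_boundaries(path, threshold=0.4):
          """Flag frames whose color-histogram distance to the previous frame
          exceeds a threshold; a simple stand-in for singularity detection."""
          cap = cv2.VideoCapture(path)
          prev_hist, boundaries, idx = None, [], 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                  [0, 256, 0, 256, 0, 256])
              hist = cv2.normalize(hist, hist).flatten()
              if prev_hist is not None:
                  # Bhattacharyya distance between consecutive frame histograms
                  d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
                  if d > threshold:
                      boundaries.append(idx)
              prev_hist, idx = hist, idx + 1
          cap.release()
          return boundaries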

  16. A Large-scale Benchmark Dataset for Event Recognition in Surveillance Video

    Science.gov (United States)

    2011-06-01

    For the stationary dataset, we include downsampled versions obtained by downsampling the original HD videos to lower frame rates and pixel resolutions ... when video frame rates and pixel resolutions are low; this is a relatively unexplored area. [Figure 2: six example scenes in the VIRAT Video Dataset.] Authors: Sangmin Oh, Anthony Hoogs, Amitha Perera, Naresh Cuntoor, Chia-Chih Chen.

  17. ALGORITHMS FOR AUTOMATIC RUNWAY DETECTION ON VIDEO SEQUENCES

    Directory of Open Access Journals (Sweden)

    A. I. Logvin

    2015-01-01

    Full Text Available The article discusses an algorithm for automatic runway detection on video sequences. The main stages of the algorithm are presented. Some methods to increase the reliability of recognition are described.

  18. A simple strategy for fall events detection

    KAUST Repository

    Harrou, Fouzi

    2017-01-20

    The paper concerns the detection of fall events based on human silhouette shape variations. The detection of fall events is addressed from the statistical point of view as an anomaly detection problem. Specifically, the paper investigates the multivariate exponentially weighted moving average (MEWMA) control chart to detect fall events. Towards this end, a set of ratios of five partial occupancy areas of the human body is collected for each frame and used as the input data to the MEWMA chart. The MEWMA fall detection scheme has been successfully applied to two publicly available fall detection databases, the UR fall detection dataset (URFD) and the fall detection dataset (FDD). The monitoring strategy developed was able to provide early alert mechanisms in the event of fall situations.
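
    A minimal NumPy sketch of the MEWMA statistic applied to a multivariate feature stream (here, hypothetical per-frame occupancy-area ratios); the smoothing parameter and the control limit are illustrative, not the paper's calibrated values.

      import numpy as np

      def mewma_scores(X, lam=0.2):
          """Multivariate EWMA control statistic T^2 for each observation in X (n x p).
          Larger values indicate a larger departure from the in-control mean."""
          X = np.asarray(X, dtype=float)
          mu = X.mean(axis=0)             # in practice: estimated from fall-free reference data
          sigma = np.cov(X, rowvar=False)
          z = np.zeros(X.shape[1])
          scores = []
          for t, x in enumerate(X, start=1):
              z = lam * (x - mu) + (1 - lam) * z
              # exact covariance of the EWMA vector at time t
              cov_z = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t)) * sigma
              scores.append(float(z @ np.linalg.solve(cov_z, z)))
          return np.array(scores)

      # alarm when the score exceeds a control limit h chosen for a target false-alarm rate:
      # alarms = mewma_scores(ratios) > h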

  19. SIGMATA: Storage Integrity Guaranteeing Mechanism against Tampering Attempts for Video Event Data Recorders

    Directory of Open Access Journals (Sweden)

    Hyuckmin Kwon

    2016-04-01

    Full Text Available The usage and market size of video event data recorders (VEDRs), also known as car black boxes, are rapidly increasing. Since VEDRs can provide more visual information about car accident situations than any other device that is currently used for accident investigations (e.g., closed-circuit television), the integrity of the VEDR contents is important to any meaningful investigation. Researchers have focused on file system integrity or photographic approaches to integrity verification. However, unlike other general data, the video data in VEDRs exhibit a unique I/O behavior in that the videos are stored chronologically. In addition, the owners of VEDRs can manipulate unfavorable scenes after accidents to conceal their recorded behavior. Since prior art does not consider the time relationship between the frames and fails to discover frame-wise forgery, a more detailed integrity assurance is required. In this paper, we focus on the development of a frame-wise forgery detection mechanism that resolves the limitations of previous mechanisms. We introduce SIGMATA, a novel storage integrity guaranteeing mechanism against tampering attempts for VEDRs. We describe its operation, demonstrate its effectiveness for detecting possible frame-wise forgery, and compare it with existing mechanisms. The result shows that the existing mechanisms fail to detect any frame-wise forgery, while our mechanism thoroughly detects every frame-wise forgery. We also evaluate its computational overhead using real VEDR videos. The results show that SIGMATA indeed discovers frame-wise forgery attacks effectively and efficiently, with an encoding overhead of less than 1.5 milliseconds per frame.
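
    The following is not SIGMATA itself, but a generic sketch of frame-wise integrity protection by chaining a keyed hash across frames, so that removing, reordering or altering any frame breaks verification; the frame bytes and key are placeholders.

      import hmac
      import hashlib

      def chain_tags(frames, key):
          """Compute a keyed hash per frame, chained to the previous tag, so each
          tag commits to the entire frame history up to that point."""
          tags, prev = [], b"\x00" * 32
          for frame_bytes in frames:
              tag = hmac.new(key, prev + frame_bytes, hashlib.sha256).digest()
              tags.append(tag)
              prev = tag
          return tags

      def verify(frames, tags, key):
          return tags == chain_tags(frames, key)

      if __name__ == "__main__":
          key = b"device-unique-secret"                    # placeholder key
          frames = [b"frame-0", b"frame-1", b"frame-2"]    # placeholder frame payloads
          tags = chain_tags(frames, key)
          print(verify(frames, tags, key))                      # True
          print(verify([frames[0], frames[2]], tags[:2], key))  # False: a frame was dropped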

  20. Moving Shadow Detection in Video Using Cepstrum Regular Paper

    OpenAIRE

    Cogun, Fuat; Cetin, Ahmet Enis

    2013-01-01

    Moving shadows constitute problems in various applications such as image segmentation and object tracking. The main cause of these problems is the misclassification of the shadow pixels as target pixels. Therefore, the use of an accurate and reliable shadow detection method is essential to realize intelligent video processing applications. In this paper, a cepstrum‐based method for moving shadow detection is presented. The proposed method is tested on outdoor and indoor video sequences using ...

  1. Transition logo detection for sports videos highlight extraction

    Science.gov (United States)

    Su, Po-Chyi; Wang, Yu-Wei; Chen, Chien-Chang

    2006-10-01

    This paper presents a highlight extraction scheme for sports videos. The approach makes use of the transition logos inserted preceding and following the slow motion replays by the broadcaster, which demonstrate highlights of the game. First, the features of a MPEG compressed video are retrieved for subsequent processing. After the shot boundary detection procedure, the processing units are formed and the units with fast moving scenes are then selected. Finally, the detection of overlaying objects is performed to signal the appearance of a transition logo. Experimental results show the feasibility of this promising method for sports videos highlight extraction.

  2. A novel video dataset for change detection benchmarking.

    Science.gov (United States)

    Goyette, Nil; Jodoin, Pierre-Marc; Porikli, Fatih; Konrad, Janusz; Ishwar, Prakash

    2014-11-01

    Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video data set exists for benchmarking different methods. Presented here is a unique change detection video data set consisting of nearly 90 000 frames in 31 video sequences representing six categories selected to cover a wide range of challenges in two modalities (color and thermal infrared). A distinguishing characteristic of this benchmark video data set is that each frame is meticulously annotated by hand for ground-truth foreground, background, and shadow area boundaries-an effort that goes much beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of video-based change detection algorithms. This paper discusses various aspects of the new data set, quantitative performance metrics used, and comparative results for over two dozen change detection algorithms. It draws important conclusions on solved and remaining issues in change detection, and describes future challenges for the scientific community. The data set, evaluation tools, and algorithm rankings are available to the public on a website and will be updated with feedback from academia and industry in the future.

  3. Real-time Multiple Abnormality Detection in Video Data

    DEFF Research Database (Denmark)

    Have, Simon Hartmann; Ren, Huamin; Moeslund, Thomas B.

    2013-01-01

    Automatic abnormality detection in video sequences has recently gained increasing attention within the research community. Although progress has been made, there are still some limitations in current research. While most systems are designed to detect a specific abnormality, others which are capable of detecting more than two types of abnormalities rely on heavy computation. Therefore, we provide a framework for detecting abnormalities in video surveillance by using multiple features and cascade classifiers, yet achieve above real-time processing speed. Experimental results on two datasets...

  4. Tracking Large-Scale Video Remix in Real-World Events

    OpenAIRE

    Xie, Lexing; Natsev, Apostol; He, Xuming; Kender, John; Hill, Matthew; Smith, John R

    2012-01-01

    Social information networks, such as YouTube, contain traces of both explicit online interaction (such as "like", leaving a comment, or subscribing to a video feed) and latent interactions (such as quoting, or remixing parts of a video). We propose visual memes, or frequently re-posted short video segments, for tracking such latent video interactions at scale. Visual memes are extracted by scalable detection algorithms that we develop, with high accuracy. We further augment visual memes with ...

  5. Event Detection Using "Variable Module Graphs" for Home Care Applications

    Directory of Open Access Journals (Sweden)

    Sethi Amit

    2007-01-01

    Full Text Available Technology has reached new heights, making sound and video capture devices ubiquitous and affordable. We propose a paradigm to exploit this technology for home care applications, especially for surveillance and complex event detection. Complex vision tasks such as event detection in a surveillance video can be divided into subtasks such as human detection, tracking, recognition, and trajectory analysis. The video can be thought of as being composed of various features. These features can be roughly arranged in a hierarchy from low-level features to high-level features. Low-level features include edges and blobs, and high-level features include objects and events. Loosely, low-level feature extraction is based on signal/image processing techniques, while high-level feature extraction is based on machine learning techniques. Traditionally, vision systems extract features in a feed-forward manner along the hierarchy, that is, certain modules extract low-level features and other modules make use of these low-level features to extract high-level features. Along with others in the research community, we have worked on this design approach. In this paper, we elaborate on the recently introduced V/M graph. We present our work on using this paradigm to develop home care applications. The primary objective is surveillance of a location for subject tracking as well as detection of irregular or anomalous behavior. This is done automatically with minimal human involvement, where the system has been trained to raise an alarm when anomalous behavior is detected.

  6. Building 3D Event Logs for Video Investigation

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2015-01-01

    In scene investigation, creating a video log captured using a handheld camera is more convenient and more complete than taking photos and notes. By introducing video analysis and computer vision techniques, it is possible to build a spatio-temporal representation of the investigation. Such a

  7. System events: readily accessible features for surgical phase detection.

    Science.gov (United States)

    Malpani, Anand; Lea, Colin; Chen, Chi Chiung Grace; Hager, Gregory D

    2016-06-01

    Surgical phase recognition using sensor data is challenging due to high variation in patient anatomy and surgeon-specific operating styles. Segmenting surgical procedures into constituent phases is of significant utility for resident training, education, self-review, and context-aware operating room technologies. Phase annotation is a highly labor-intensive task and would benefit greatly from automated solutions. We propose a novel approach using system events (for example, activation of cautery tools) that are easily captured in most surgical procedures. Our method involves extracting event-based features over 90-s intervals and assigning a phase label to each interval. We explore three classification techniques: support vector machines, random forests, and temporal convolutional neural networks. Each of these models independently predicts a label for each time interval. We also examine segmental inference using an approach based on the semi-Markov conditional random field, which jointly performs phase segmentation and classification. Our method is evaluated on a data set of 24 robot-assisted hysterectomy procedures. Our framework is able to detect surgical phases with an accuracy of 74% using event-based features over a set of five different phases: ligation, dissection, colpotomy, cuff closure, and background. Precision and recall values for the cuff closure (Precision: 83%, Recall: 98%) and dissection (Precision: 75%, Recall: 88%) classes were higher than for other classes. The normalized Levenshtein distance between predicted and ground truth phase sequence was 25%. Our findings demonstrate that system event features are useful for automatically detecting surgical phase. Events contain phase information that cannot be obtained from motion data and that would require advanced computer vision algorithms to extract from a video. Many of these events are not specific to robotic surgery and can easily be recorded in non-robotic surgical modalities. In future
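
    A minimal sketch of the interval-level classification step under stated assumptions: each 90-second interval has already been turned into a fixed-length event-based feature vector (feature extraction is not shown), the labels are the five phases named above, and the SVM variant with a standard scikit-learn pipeline is only one of the three classifiers explored.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      PHASES = ["ligation", "dissection", "colpotomy", "cuff_closure", "background"]

      def train_phase_classifier(X_train, y_train):
          """X_train: (n_intervals, n_event_features) counts/durations of system events
          per 90-second interval; y_train: phase index per interval."""
          clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
          clf.fit(X_train, y_train)
          return clf

      # usage with hypothetical interval features:
      # clf = train_phase_classifier(X_train, y_train)
      # predicted = [PHASES[i] for i in clf.predict(X_test)]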

  8. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.

    2005-01-01

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due to e.g. movement of thicket and changes in illumination. To reduce the number of false alarms a Track Before Detect (TBD) approach is suggested. In this TBD implementation all objects detected in the background subtraction process are followed over a number of frames. An alarm is given only if a detected object shows a pattern of movement consistent with predefined rules. The method is tested on a number of video sequences and a substantial reduction in the number of false alarms is demonstrated.
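
    A compact OpenCV sketch in the spirit of this approach: background subtraction proposes objects, and an alarm is raised only if a tracked blob persists and moves consistently over several frames. The association, persistence and displacement thresholds are illustrative, not the paper's rules.

      import cv2
      import numpy as np

      def track_before_detect(path, min_area=500, min_frames=10, min_disp=40.0):
          cap = cv2.VideoCapture(path)
          subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
          tracks = []   # each track is a list of centroids from consecutive frames
          alarms = []
          frame_idx = 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              mask = subtractor.apply(frame)
              mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
              contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
              centroids = []
              for c in contours:
                  if cv2.contourArea(c) >= min_area:
                      x, y, w, h = cv2.boundingRect(c)
                      centroids.append((x + w / 2.0, y + h / 2.0))
              # greedy nearest-neighbour association of detections to existing tracks
              new_tracks = []
              for cen in centroids:
                  best = min(tracks, key=lambda t: np.hypot(t[-1][0] - cen[0], t[-1][1] - cen[1]),
                             default=None)
                  if best is not None and np.hypot(best[-1][0] - cen[0], best[-1][1] - cen[1]) < 50:
                      tracks.remove(best)
                      new_tracks.append(best + [cen])
                  else:
                      new_tracks.append([cen])
              tracks = new_tracks
              # alarm only for objects that persist and show a consistent net movement
              for t in tracks:
                  if len(t) >= min_frames and np.hypot(t[-1][0] - t[0][0], t[-1][1] - t[0][1]) >= min_disp:
                      alarms.append(frame_idx)
              frame_idx += 1
          cap.release()
          return alarms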

  9. A novel visual saliency detection method for infrared video sequences

    Science.gov (United States)

    Wang, Xin; Zhang, Yuzhen; Ning, Chen

    2017-12-01

    Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit a lot from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.

  10. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always a small minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents running it at higher frame resolutions. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.
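
    A CPU-only sketch of the two core steps (registering the previous frame to the current one to cancel camera motion, then frame differencing), assuming OpenCV and grayscale uint8 frames; the ECC translation model and thresholds are illustrative stand-ins for the paper's GPU pipeline.

      import cv2
      import numpy as np

      def moving_target_mask(prev_gray, gray, diff_thresh=30):
          """Register the previous frame to the current one (to cancel UAV motion),
          then take the absolute frame difference to expose small moving targets."""
          warp = np.eye(2, 3, dtype=np.float32)
          criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
          # translation-only ECC alignment; a homography could be used for stronger motion
          _, warp = cv2.findTransformECC(gray, prev_gray, warp, cv2.MOTION_TRANSLATION,
                                         criteria, None, 5)
          registered_prev = cv2.warpAffine(prev_gray, warp, (gray.shape[1], gray.shape[0]),
                                           flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
          diff = cv2.absdiff(gray, registered_prev)
          _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
          return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))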

  11. A time-varying subjective quality model for mobile streaming videos with stalling events

    Science.gov (United States)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.

    2015-09-01

    Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.

  12. Current Events and Technology: Video and Audio on the Internet.

    Science.gov (United States)

    Laposata, Matthew M.; Howick, Tom; Dias, Michael J.

    2002-01-01

    Explains the effectiveness of visual aids compared to written materials in teaching and recommends using television segments for teaching purposes. Introduces digitized clips provided by major television news organizations through the Internet and describes the technology requirements for successful viewing of streaming videos and audios. (YDS)

  13. Human Rights Event Detection from Heterogeneous Social Media Graphs.

    Science.gov (United States)

    Chen, Feng; Neill, Daniel B

    2015-03-01

    Human rights organizations are increasingly monitoring social media for identification, verification, and documentation of human rights violations. Since manual extraction of events from the massive amount of online social network data is difficult and time-consuming, we propose an approach for automated, large-scale discovery and analysis of human rights-related events. We apply our recently developed Non-Parametric Heterogeneous Graph Scan (NPHGS), which models social media data such as Twitter as a heterogeneous network (with multiple different node types, features, and relationships) and detects emerging patterns in the network, to identify and characterize human rights events. NPHGS efficiently maximizes a nonparametric scan statistic (an aggregate measure of anomalousness) over connected subgraphs of the heterogeneous network to identify the most anomalous network clusters. It summarizes each event with information such as type of event, geographical locations, time, and participants, and provides documentation such as links to videos and news reports. Building on our previous work that demonstrates the utility of NPHGS for civil unrest prediction and rare disease outbreak detection, we present an analysis of human rights events detected by NPHGS using two years of Twitter data from Mexico. NPHGS was able to accurately detect relevant clusters of human rights-related tweets prior to international news sources, and in some cases, prior to local news reports. Analysis of social media using NPHGS could enhance the information-gathering missions of human rights organizations by pinpointing specific abuses, revealing events and details that may be blocked from traditional media sources, and providing evidence of emerging patterns of human rights violations. This could lead to more timely, targeted, and effective advocacy, as well as other potential interventions.

  14. Runway Detection From Map, Video and Aircraft Navigational Data

    Science.gov (United States)

    2016-03-01

    ... are corrected using image-processing techniques, such as the Hough transform for linear features. Subject terms: runway, map, aircraft, video, detection, rotation matrix, Hough transform.
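
    As a small illustration of the linear-feature step mentioned above (not the report's full pipeline), the probabilistic Hough transform over an edge map can extract long straight segments such as runway edges; the Canny and Hough thresholds below are illustrative.

      import cv2
      import numpy as np

      def detect_runway_lines(image_bgr, min_length=150):
          """Return long straight line segments (candidate runway edges) via
          Canny edges followed by the probabilistic Hough transform."""
          gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 50, 150)
          lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                  minLineLength=min_length, maxLineGap=20)
          return [] if lines is None else [tuple(l[0]) for l in lines]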

  15. Context-aware event detection smartphone application for first responders

    Science.gov (United States)

    Boddhu, Sanjay K.; Dave, Rakesh P.; McCartney, Matt; West, James A.; Williams, Robert L.

    2013-05-01

    The rise of social networking platforms like Twitter, Facebook, etc., has provided seamless sharing of information (as chat, video and other media) among their user communities on a global scale. Further, the proliferation of smartphones and their connectivity networks has empowered ordinary individuals to share and acquire information regarding the events happening in their immediate vicinity in a real-time fashion. This human-centric sensed data, generated in a "human-as-sensor" approach, is tremendously valuable as it is delivered mostly with apt annotations and ground truth that would be missing in traditional machine-centric sensors, besides a high redundancy factor (the same data through multiple users). Further, when appropriately employed, this real-time data can support detecting localized events like fire, accidents, shooting, etc., as they unfold, and pinpoint individuals being affected by those events. This spatiotemporal information, when made available to first responders in the event vicinity (or approaching it), can greatly assist them in making effective decisions to protect property and life in a timely fashion. In this vein, under the SATE and YATE programs, the research team at the AFRL Tec^Edge Discovery labs demonstrated the feasibility of developing smartphone applications that can provide an augmented reality view of the detected events in a given geographical location (localized) and also provide an event search capability over a large geographic extent. In its current state, the application, through its backend connectivity, utilizes a data (text and image) processing framework which deals with data challenges like identifying and aggregating important events, analyzing and correlating the events temporally and spatially, and building a search-enabled event database. Further, the smartphone application with its backend data processing workflow has been successfully field tested with live user-generated feeds.

  16. Semantic Context Detection Using Audio Event Fusion

    Directory of Open Access Journals (Sweden)

    Cheng Wen-Huang

    2006-01-01

    Full Text Available Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine, SVM) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.
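
    A minimal NumPy sketch of one building block involved: scoring a feature sequence against a trained HMM with the forward algorithm in log space. The model parameters here are placeholders, and a real system would use Gaussian-mixture emissions over audio features rather than precomputed emission log-probabilities.

      import numpy as np

      def forward_loglik(log_pi, log_A, log_B):
          """Forward-algorithm log-likelihood of an observation sequence.
          log_pi: (S,) initial state log-probs; log_A: (S, S) transition log-probs;
          log_B: (T, S) per-frame emission log-probs of the observed features."""
          alpha = log_pi + log_B[0]
          for t in range(1, log_B.shape[0]):
              # log-sum-exp over previous states for each current state
              m = alpha.max()
              alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_A)) + log_B[t]
          m = alpha.max()
          return m + np.log(np.exp(alpha - m).sum())

      # classify a clip by picking the audio-event HMM with the highest log-likelihood:
      # label = max(models, key=lambda name: forward_loglik(*models[name]))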

  17. Polyp Detection and Segmentation from Video Capsule Endoscopy: A Review

    Directory of Open Access Journals (Sweden)

    V. B. Surya Prasath

    2016-12-01

    Full Text Available Video capsule endoscopy (VCE) is used widely nowadays for visualizing the gastrointestinal (GI) tract. Capsule endoscopy exams are prescribed usually as an additional monitoring mechanism and can help in identifying polyps, bleeding, etc. To analyze the large scale video data produced by VCE exams, automatic image processing, computer vision, and learning algorithms are required. Recently, automatic polyp detection algorithms have been proposed with various degrees of success. Though polyp detection in colonoscopy and other traditional endoscopy procedure based images is becoming a mature field, due to its unique imaging characteristics, detecting polyps automatically in VCE is a hard problem. We review different polyp detection approaches for VCE imagery and provide systematic analysis with challenges faced by standard image processing and computer vision methods.

  18. TNO at TRECVID 2008, Combining Audio and Video Fingerprinting for Robust Copy Detection

    NARCIS (Netherlands)

    Doets, P.J.; Eendebak, P.T.; Ranguelova, E.; Kraaij, W.

    2009-01-01

    TNO has evaluated a baseline audio and a video fingerprinting system based on robust hashing for the TRECVID 2008 copy detection task. We participated in the audio, the video and the combined audio-video copy detection task. The audio fingerprinting implementation clearly outperformed the video

  19. Vehicle Plate Detection in Car Black Box Video

    Directory of Open Access Journals (Sweden)

    Dongjin Park

    2017-01-01

    Full Text Available Internet services that share vehicle black box videos need a way to obfuscate license plates in uploaded videos because of privacy issues. Thus, plate detection is one of the critical functions that such services rely on. Even though various types of detection methods are available, they are not suitable for black box videos because no assumption about size, number of plates, and lighting conditions can be made. We propose a method to detect Korean vehicle plates from black box videos. It works in two stages: the first stage aims to locate a set of candidate plate regions and the second stage identifies only actual plates from candidates by using a support vector machine classifier. The first stage consists of five sequential substeps. At first, it produces candidate regions by combining single character areas and then eliminates candidate regions that fail to meet plate conditions through the remaining substeps. For the second stage, we propose a feature vector that captures the characteristics of plates in texture and color. For performance evaluation, we compiled our dataset which contains 2,627 positive and negative images. The evaluation results show that the proposed method improves accuracy and sensitivity by at least 5% and is 30 times faster compared with an existing method.

  20. Google Glass Video Capture of Cardiopulmonary Resuscitation Events: A Pilot Simulation Study.

    Science.gov (United States)

    Kassutto, Stacey M; Kayser, Joshua B; Kerlin, Meeta P; Upton, Mark; Lipschik, Gregg; Epstein, Andrew J; Dine, C Jessica; Schweickert, William

    2017-12-01

    Video recording of resuscitation from fixed camera locations has been used to assess adherence to guidelines and provide feedback on performance. However, inpatient cardiac arrests often happen in unpredictable locations and crowded rooms, making video recording of these events problematic. We sought to understand the feasibility of Google Glass (GG) as a method for recording inpatient cardiac arrests and capturing salient resuscitation factors for post-event review. This observational study involved recording simulated cardiac arrest events on inpatient medical wards. Each simulation was reviewed by 3 methods: in-room physician direct observation, stationary video camera (SVC), and GG. Nurse and physician specialists analyzed the videos for global visibility and audibility, as well as recording quality of predefined resuscitation events and behaviors. Resident code leaders were surveyed regarding attitudes toward GG use in the clinical emergency setting. Of 11 simulated cardiac arrest events, 9 were successfully recorded by all observation methods (1 GG failure, 1 SVC failure). GG was judged slightly better than SVC recording for average global visualization (3.95 versus 3.15, P = .0003) and average global audibility (4.77 versus 4.42, P = .002). Of the GG videos, 19% had limitations in overall interpretability compared with 35% of SVC recordings (P = .039). All 10 survey respondents agreed that GG was easy to use; however, 2 found it distracting and 3 were uncomfortable with future use during actual resuscitations. GG is a feasible and acceptable method for capturing simulated resuscitation events in the inpatient setting.

  1. Video supported performance feedback to nursing students after simulated practice events.

    OpenAIRE

    Monger, Eloise; Weal, Mark J.; Gobbi, Mary; Michaelides, Danius; Shepherd, Matthew; Wilson, Matthew; Barnard, Thomas

    2008-01-01

    Within the field of health care education, simulation is used increasingly to provide students with opportunities to develop their clinical skills (Alnier, 2006), often occurring in specially designed facilities with audio-video capture of student performance. The video capture enables analysis and assessment of student performance and/or competence, the analysis of events (DiGiacomo et al, 1997), processes (Ram et al, 1999), and Objective Clinical Examinations (Humphris and Kaney, 2000; Viv...

  2. Real-time logo detection and tracking in video

    Science.gov (United States)

    George, M.; Kehtarnavaz, N.; Rahman, M.; Carlsohn, M.

    2010-05-01

    This paper presents a real-time implementation of a logo detection and tracking algorithm in video. The motivation of this work stems from applications on smart phones that require the detection of logos in real time. For example, one application involves detecting company logos so that customers can easily get special offers in real time. This algorithm uses a hybrid approach: it initially runs the Scale Invariant Feature Transform (SIFT) algorithm on the first frame in order to obtain the logo location and then uses an online calibration of color within the SIFT-detected area in order to detect and track the logo in subsequent frames in a time-efficient manner. The results obtained indicate that this hybrid approach allows robust logo detection and tracking to be achieved in real time.
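
    A minimal sketch of the SIFT-based first stage (locating a known logo in a frame by descriptor matching with a ratio test), assuming an OpenCV build that includes SIFT; the subsequent color-calibration tracking stage is not shown, and the ratio and match-count thresholds are illustrative.

      import cv2

      def locate_logo(logo_gray, frame_gray, ratio=0.75, min_matches=10):
          """Match SIFT descriptors of a reference logo against a video frame and
          return the matched keypoint locations in the frame (or None)."""
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(logo_gray, None)
          kp2, des2 = sift.detectAndCompute(frame_gray, None)
          if des1 is None or des2 is None:
              return None
          matcher = cv2.BFMatcher(cv2.NORM_L2)
          good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                  if m.distance < ratio * n.distance]
          if len(good) < min_matches:
              return None
          return [kp2[m.trainIdx].pt for m in good]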

  3. Image Change Detection in Video Using Illumination Invariant Change Detection

    Directory of Open Access Journals (Sweden)

    Adri Priadana

    2017-01-01

    Full Text Available In the modern era there is still a lot of juvenile delinquency in the community, especially among people in urban areas. Juvenile delinquency may take the form of fights, illegal street racing, gambling, and graffiti on walls without permission. Wall vandalism is usually carried out on the walls of office buildings and on public or private property. The result of a vandalized wall can be seen from the image change between the initial image and the image after a motion. This study develops an image change detection system in video to detect the act of spraying graffiti on a wall via a closed-circuit television (CCTV) camera, simulated using a webcam. The motion detection process uses the Accumulative Differences Images (ADI) method, and the image change detection process uses the Illumination Invariant Change Detection method coupled with image cropping, in which a reference image (before any movement) is compared with the image after movement is detected. The detection system was tested at different times of day, i.e., in the morning, at noon, in the afternoon, and in the evening. The proposed method for image change detection in video gives results with an accuracy rate of 92.86%.

  4. Video produced for Andrew Lankford's talk at Google's Zeitgeist event on 2010.

    CERN Multimedia

    ATLAS Experiment

    2011-01-01

    A video made for the ATLAS talk at Google's Zeitgeist event during fall 2010, given by deputy spokesperson Andrew Lankford. The event, organized by Google, invited leaders of our time to discuss perspectives on global issues. For more information about the event go to http://www.zeitgeistminds.com/about/. The recording of the talk is available at: http://www.youtube.com/watch?v=VjIJS8zUimU

  5. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  6. Detection of comfortable temperature based on thermal events detection indoors

    Science.gov (United States)

    Szczurek, Andrzej; Maciejewska, Monika; Uchroński, Mariusz

    2017-11-01

    This work focuses on thermal comfort as the basis for controlling indoor conditions. Its objective is a method to determine the thermal preferences of office occupants. The method is based on the detection of thermal events. They occur when indoor conditions are under the control of occupants. Thermal events are associated with the use of local heating/cooling sources which have user-adjustable settings. The detection is based on Fourier analysis of indoor temperature time series. The relevant data is collected by a temperature sensor. We achieved a thermal event recognition rate of 86%. Conditions in which the indoor environment was beyond the occupants' control were detected with a 95.6% success rate. Using experimental data it was demonstrated that the method makes it possible to reproduce key elements of the temperature statistics associated with conditions when occupants are in control of thermal comfort.
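
    A small NumPy sketch of the Fourier-analysis idea: examine the spectrum of an indoor-temperature window and flag strong short-period components that plausibly correspond to occupant-controlled heating/cooling cycles. The sampling interval, period band and threshold are illustrative, not the study's settings.

      import numpy as np

      def thermal_event_score(temps, dt_minutes=1.0, min_period=5.0, max_period=60.0):
          """Fraction of (non-DC) spectral energy in periods between min_period and
          max_period minutes; a high fraction suggests occupant-driven thermal events."""
          temps = np.asarray(temps, dtype=float)
          x = temps - temps.mean()                       # remove the DC component
          spectrum = np.abs(np.fft.rfft(x)) ** 2
          freqs = np.fft.rfftfreq(len(x), d=dt_minutes)  # cycles per minute
          periods = np.divide(1.0, freqs, out=np.full_like(freqs, np.inf), where=freqs > 0)
          band = (periods >= min_period) & (periods <= max_period)
          total = spectrum[1:].sum()
          return float(spectrum[band].sum() / total) if total > 0 else 0.0

      # events_present = thermal_event_score(window_of_temperatures) > 0.5  # illustrative threshold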

  7. Video Pedestrian Detection Based on Orthogonal Scene Motion Pattern

    Directory of Open Access Journals (Sweden)

    Jianming Qu

    2014-01-01

    Full Text Available In fixed video scenes, scene motion patterns can provide very useful prior knowledge for pedestrian detection, which is still a challenge at present. A new approach to cascade pedestrian detection using an orthogonal scene motion pattern model in general-density video is developed in this paper. To statistically model the pedestrian motion pattern, a probability grid overlaying the whole scene is set up to partition the scene into paths and holding areas. Features extracted from different pattern areas are classified by a group of specific strategies. Instead of using a unitary classifier, the employed classifier is composed of two directional subclassifiers trained, respectively, with different samples selected along two orthogonal directions. Considering that the negative images from the detection window scanning are much more numerous than the positive ones, the cascade AdaBoost technique is adopted by the subclassifiers to reduce the negative image computations. The proposed approach is proved effective by static classification experiments and surveillance video experiments.

  8. Sign Language Video Processing for Text Detection in Hindi Language

    Directory of Open Access Journals (Sweden)

    Rashmi B Hiremath

    2016-10-01

    Full Text Available Sign language is a way of expressing oneself through body language, in which expressions, intentions, or sentiments are conveyed by physical behaviors, for example facial expressions, body posture, gestures, eye movements, touch and the use of space. Non-verbal communication exists in both animals and humans, but this article concentrates on the interpretation of human non-verbal or sign language into Hindi textual expression. The proposed implementation utilizes image processing methods and synthetic intelligence strategies to achieve sign video recognition. To carry out the proposed task, it uses image processing methods such as frame-analysis-based tracking, edge detection, wavelet transform, erosion, dilation, blur elimination and noise elimination on training videos. It also uses elliptical Fourier descriptors called SIFT for shape feature extraction and most important part analysis for feature set optimization and reduction. For result analysis, this paper uses videos of different categories such as signs for weeks, months, relations, etc. A database of extracted outcomes is compared with the signer's video fed to the system as an input by a trained unclear inference system.

  9. Logo detection and classification in a sport video: video indexing for sponsorship revenue control

    Science.gov (United States)

    Kovar, Bohumil; Hanjalic, Alan

    2001-12-01

    This paper presents a novel approach to detecting and classifying a trademark logo in frames of a sport video. In view of the fact that we attempt to detect and recognize a logo in a natural scene, the algorithm developed in this paper differs from traditional techniques for logo detection and classification that are applicable either to well-structured general text documents (e.g. invoices, memos, bank cheques) or to specialized trademark logo databases, where logos appear isolated on a clear background and where their detection and classification is not disturbed by the surrounding visual detail. Although the development of our algorithm is still in its starting phase, experimental results performed so far on a set of soccer TV broadcasts are very encouraging.

  10. Classification of extreme facial events in sign language videos

    National Research Council Canada - National Science Library

    Antonakos, Epameinondas; Pitsikalis, Vassilis; Maragos, Petros

    2014-01-01

    .... ESC is applied on various facial cues - as, for instance, pose rotations, head movements and eye blinking - leading to the detection of extreme states such as left/right, up/down and open/closed...

  11. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    OpenAIRE

    Young-Sook Lee; Wan-Young Chung

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects...

  12. Recognising safety critical events: can automatic video processing improve naturalistic data analyses?

    Science.gov (United States)

    Dozza, Marco; González, Nieves Pañeda

    2013-11-01

    New trends in research on traffic accidents include Naturalistic Driving Studies (NDS). NDS are based on large scale data collection of driver, vehicle, and environment information in real world. NDS data sets have proven to be extremely valuable for the analysis of safety critical events such as crashes and near crashes. However, finding safety critical events in NDS data is often difficult and time consuming. Safety critical events are currently identified using kinematic triggers, for instance searching for deceleration below a certain threshold signifying harsh braking. Due to the low sensitivity and specificity of this filtering procedure, manual review of video data is currently necessary to decide whether the events identified by the triggers are actually safety critical. Such reviewing procedure is based on subjective decisions, is expensive and time consuming, and often tedious for the analysts. Furthermore, since NDS data is exponentially growing over time, this reviewing procedure may not be viable anymore in the very near future. This study tested the hypothesis that automatic processing of driver video information could increase the correct classification of safety critical events from kinematic triggers in naturalistic driving data. Review of about 400 video sequences recorded from the events, collected by 100 Volvo cars in the euroFOT project, suggested that drivers' individual reaction may be the key to recognize safety critical events. In fact, whether an event is safety critical or not often depends on the individual driver. A few algorithms, able to automatically classify driver reaction from video data, have been compared. The results presented in this paper show that the state of the art subjective review procedures to identify safety critical events from NDS can benefit from automated objective video processing. In addition, this paper discusses the major challenges in making such video analysis viable for future NDS and new potential

  13. Short-term effects of prosocial video games on aggression: an event-related potential study

    Science.gov (United States)

    Liu, Yanling; Teng, Zhaojun; Lan, Haiying; Zhang, Xin; Yao, Dezhong

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 min, then participated in an event-related potential (ERP) experiment based on an oddball paradigm and designed to test electrophysiological responses to prosocial and violent words. Finally, subjects completed a competitive reaction time task (CRTT) which based on Taylor's Aggression Paradigm and contains reaction time and noise intensity chosen as a measure of aggressive behavior. The results show that the prosocial video game group (compared to the neutral video game group) displayed smaller P300 amplitudes, were more accurate in distinguishing violent words, and were less aggressive as evaluated by the CRTT of noise intensity chosen. A mediation analysis shows that the P300 amplitude evoked by violent words partially mediates the relationship between type of video game and subsequent aggressive behavior. The results support theories based on the General Learning Model. We provide converging behavioral and neural evidence that exposure to prosocial media may reduce aggression. PMID:26257620

  14. Short-term effects of prosocial video games on aggression: an event-related potential study.

    Science.gov (United States)

    Liu, Yanling; Teng, Zhaojun; Lan, Haiying; Zhang, Xin; Yao, Dezhong

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 min, then participated in an event-related potential (ERP) experiment based on an oddball paradigm and designed to test electrophysiological responses to prosocial and violent words. Finally, subjects completed a competitive reaction time task (CRTT) which based on Taylor's Aggression Paradigm and contains reaction time and noise intensity chosen as a measure of aggressive behavior. The results show that the prosocial video game group (compared to the neutral video game group) displayed smaller P300 amplitudes, were more accurate in distinguishing violent words, and were less aggressive as evaluated by the CRTT of noise intensity chosen. A mediation analysis shows that the P300 amplitude evoked by violent words partially mediates the relationship between type of video game and subsequent aggressive behavior. The results support theories based on the General Learning Model. We provide converging behavioral and neural evidence that exposure to prosocial media may reduce aggression.

  15. Short-Term Effects of Prosocial Video Games on Aggression: An Event-Related Potential Study

    Directory of Open Access Journals (Sweden)

    Yanling eLiu

    2015-07-01

    Full Text Available Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 minutes, then participated in an event-related potential (ERP) experiment based on an oddball paradigm and designed to test electrophysiological responses to prosocial and violent words. Finally, subjects completed a competitive reaction time task (CRTT), which is based on Taylor’s Aggression Paradigm and measures both reaction time and noise intensity preference as indices of aggressive behavior. The results show that the prosocial video game group (compared to the neutral video game group) displayed smaller P300 amplitudes, were more accurate in distinguishing violent words, and were less aggressive as evaluated by the CRTT (noise intensity preference). A mediation analysis shows that the P300 amplitude evoked by violent words partially mediates the relationship between type of video game and subsequent aggressive behavior. The results support theories based on the General Learning Model. We provide converging behavioral and neural evidence that exposure to prosocial media may reduce aggression.

  16. Detecting imperceptible movements in structures by means of video magnification

    Science.gov (United States)

    Ordóñez, Celestino; Cabo, Carlos; García-Cortés, Silverio; Menéndez, Agustín.

    2017-06-01

    The naked eye is not able to perceive very slow movements such as those occurring in certain structures under external forces. This might be the case of metallic or concrete bridges, tower cranes or steel beams. However, sometimes it is of interest to view such movements, since they can provide useful information regarding the mechanical state of those structures. In this work, we analyze the utility of video magnification to detect imperceptible movements in several types of structures. First, laboratory experiments were conducted to validate the method. Then, two different tests were carried out on real structures: one on a water slide and another on a tower crane. The results obtained allow us to conclude that image cross-correlation and video magnification is indeed a promising low-cost technique for structure health monitoring.

  17. Semantic Concept Discovery for Large Scale Zero Shot Event Detection

    Science.gov (United States)

    2015-07-25

    [Figure caption: ... for the event Rock climbing. From top to bottom are retrieved videos by the selected concepts vocabulary, bi-concepts vocabulary, OR-composite concept ...] ... significantly improves on some events, such as Birthday party (E006), Flash mob gathering (E008) and Rock climbing (E027). For these events, the detection ... concepts of the proposed method, we find that their classifiers are very discriminative and reliable. For instance, for the event Rock climbing we ...

  18. Design and Implementation of Video Shot Detection on Field Programmable Gate Arrays

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-09-01

    Full Text Available Video has become an interactive medium of communication in everyday life. The sheer volume of video makes it extremely difficult to browse through and find the required data. Hence extraction of key frames from the video, which represent the abstract of the entire video, becomes necessary. The aim of video shot detection is to find the positions of the shot boundaries, so that key frames can be selected from each shot for subsequent processing such as video summarization, indexing etc. For most surveillance applications like video summarization, face recognition etc., the hardware (real-time) implementation of these algorithms becomes necessary. In this paper we present an architecture for simultaneous access of consecutive frames, which are then used for the implementation of various video shot detection algorithms. We also present the real-time implementation of three video shot detection algorithms using the above mentioned architecture on FPGA (Field Programmable Gate Arrays).
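
    A common software baseline for the shot detection step described above is a frame-to-frame histogram difference with a threshold. The sketch below is a generic illustration on a synthetic grayscale clip, not the paper's FPGA design; the bin count and threshold are assumed values.

```python
import numpy as np

def shot_boundaries(frames, bins=32, threshold=0.5):
    """Flag frame indices where the normalized histogram difference exceeds a threshold."""
    boundaries, prev_hist = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None and 0.5 * np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)
        prev_hist = hist
    return boundaries

# Synthetic clip: 20 dark frames followed by 20 bright frames -> one cut at frame 20.
clip = [np.full((48, 64), 40, dtype=np.uint8)] * 20 + [np.full((48, 64), 200, dtype=np.uint8)] * 20
print(shot_boundaries(clip))   # expected: [20]
```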

  19. Feature Extraction in IR Images Via Synchronous Video Detection

    Science.gov (United States)

    Shepard, Steven M.; Sass, David T.

    1989-03-01

    IR video images acquired by scanning imaging radiometers are subject to several problems which make measurement of small temperature differences difficult. Among these problems are 1) aliasing, which occurs when events at frequencies higher than the video frame rate are observed, 2) limited temperature resolution imposed by the 3-bit digitization available in existing commercial systems, and 3) susceptibility to noise and background clutter. Bandwidth narrowing devices (e.g. lock-in amplifiers or boxcar averagers) are routinely used to achieve a high degree of signal-to-noise improvement for time-varying 1-dimensional signals. We will describe techniques which allow similar S/N improvement for 2-dimensional imagery acquired with an off-the-shelf scanning imaging radiometer system. These techniques are implemented in near-real-time, utilizing a microcomputer and specially developed hardware and software. We will also discuss the application of the system to feature extraction in cluttered images, and to acquisition of events which vary faster than the frame rate.
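
    The bandwidth-narrowing idea described above can be mimicked in software by lock-in style synchronous detection: each pixel's time series is multiplied by in-phase and quadrature references at the modulation frequency and averaged. The sketch below is a generic illustration on synthetic data, not the authors' hardware; the frame rate, modulation frequency, and signal strength are assumptions.

```python
import numpy as np

fps, f_mod, n_frames = 60.0, 5.0, 600
t = np.arange(n_frames) / fps

# Synthetic stack: a weak 5 Hz modulated spot buried in per-pixel noise.
rng = np.random.default_rng(1)
stack = rng.normal(0, 1.0, (n_frames, 16, 16))
stack[:, 8, 8] += 0.2 * np.sin(2 * np.pi * f_mod * t)

def lock_in(stack, fps, f_ref):
    """Per-pixel amplitude of the component synchronous with the reference frequency."""
    t = np.arange(stack.shape[0]) / fps
    i_ref = np.sin(2 * np.pi * f_ref * t)[:, None, None]
    q_ref = np.cos(2 * np.pi * f_ref * t)[:, None, None]
    i_img = (stack * i_ref).mean(axis=0)
    q_img = (stack * q_ref).mean(axis=0)
    return 2.0 * np.hypot(i_img, q_img)      # amplitude image

amp = lock_in(stack, fps, f_mod)
print("strongest pixel:", np.unravel_index(amp.argmax(), amp.shape))   # expected near (8, 8)
```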

  20. Detection of upscale-crop and partial manipulation in surveillance video based on sensor pattern noise

    National Research Council Canada - National Science Library

    Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-01-01

    … Nevertheless, there is little research being done on forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN) …
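
    The general SPN workflow (not necessarily the authors' exact pipeline) can be sketched as follows: estimate a camera fingerprint by averaging the noise residuals of reference frames, then correlate a questioned frame's residual against it. A Gaussian blur stands in for the denoising filter, and all data below are synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(frame, sigma=1.5):
    """Noise residual: frame minus a denoised version (Gaussian blur as a stand-in denoiser)."""
    return frame - gaussian_filter(frame, sigma)

def fingerprint(frames):
    """Estimate the camera's sensor pattern noise by averaging residuals of reference frames."""
    return np.mean([residual(f) for f in frames], axis=0)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
spn = rng.normal(0, 0.02, (64, 64))                       # synthetic fixed pattern noise
ref = [rng.normal(0.5, 0.05, (64, 64)) + spn for _ in range(20)]
genuine = rng.normal(0.5, 0.05, (64, 64)) + spn           # frame from the same "camera"
foreign = rng.normal(0.5, 0.05, (64, 64))                 # frame without the pattern

fp = fingerprint(ref)
print("genuine frame:", correlation(residual(genuine), fp))   # noticeably higher
print("foreign frame:", correlation(residual(foreign), fp))   # near zero
```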

  1. Optimizing a neural network for detection of moving vehicles in video

    Science.gov (United States)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
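
    A minimal sketch of the static-plus-recurrent idea, per-frame CNN features followed by an LSTM over the frame sequence, is shown below in PyTorch. The layer sizes, clip length, and classification head are arbitrary placeholders, not the network described in the paper.

```python
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Tiny per-frame feature extractor (a stand-in for the paper's detector backbone)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                       # x: (batch, 3, H, W)
        return self.fc(self.conv(x).flatten(1))

class MovingVehicleNet(nn.Module):
    """Per-frame CNN features followed by an LSTM for multi-frame analysis."""
    def __init__(self, feat_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.cnn = FrameCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clip):                    # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])            # classify using the last time step

logits = MovingVehicleNet()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)                             # torch.Size([2, 2])
```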

  2. Automatic Emotional State Detection using Facial Expression Dynamic in Videos

    Directory of Open Access Journals (Sweden)

    Hongying Meng

    2014-11-01

    Full Text Available In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. Firstly, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expression of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.

  3. A baseline algorithm for face detection and tracking in video

    Science.gov (United States)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms has considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a Haar-like feature set and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for the problem.
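
    The detection and tracking-as-continuous-detection steps described above can be sketched with OpenCV's bundled Haar cascade and a greedy centroid matcher. The video path, detector parameters, and distance threshold below are placeholders, and the snippet assumes OpenCV 4.

```python
import cv2
import numpy as np

# Haar cascade shipped with OpenCV (the Viola-Jones/Lienhart detector family).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_gray):
    return cascade.detectMultiScale(frame_gray, scaleFactor=1.2, minNeighbors=5)

def greedy_match(prev_boxes, curr_boxes, max_dist=80.0):
    """Greedily pair previous and current detections by centroid distance."""
    def centroid(b):
        x, y, w, h = b
        return np.array([x + w / 2.0, y + h / 2.0])
    pairs, used = [], set()
    for i, pb in enumerate(prev_boxes):
        dists = [np.linalg.norm(centroid(pb) - centroid(cb)) for cb in curr_boxes]
        if not dists:
            continue
        j = int(np.argmin(dists))
        if j not in used and dists[j] < max_dist:
            pairs.append((i, j))
            used.add(j)
    return pairs

cap = cv2.VideoCapture("news_clip.mp4")          # placeholder path
prev = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = detect_faces(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    print("tracked pairs:", greedy_match(prev, boxes))
    prev = list(boxes)
cap.release()
```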

  4. Detection of transient events on planetary bodies .

    Science.gov (United States)

    Di Martino, M.; Carbognani, A.

    Transient phenomena on planetary bodies are defined as luminous events of different intensities, which occur in planetary atmospheres and on surfaces; their duration spans from about 0.1 s to some hours. They consist of meteors, bolides, lightning, impact flashes on solid surfaces, auroras, etc. So far, the study of these phenomena has been very limited, due to the lack of ad hoc instrumentation, and their detection has been performed mainly on a serendipitous basis. Recently, ESA has issued an announcement of opportunity for the development of systems devoted to the detection of transient events in the Earth atmosphere and/or on the dark side of other planetary objects. One such detector has been designed and a prototype (Smart Panoramic Optical Sensor Head, SPOSH) has been constructed at Galileo Avionica S.p.A (Florence, Italy). For the sake of clarity, in what follows, we classify the transient phenomena into "Earth phenomena" and "Planetary phenomena", even though some of them originate in a similar physical context.

  5. Stable hyper-pooling and query expansion for event detection

    OpenAIRE

    Douze, Matthijs; Revaud, Jerome; Schmid, Cordelia; Jegou, Herve

    2013-01-01

    This paper makes two complementary contributions to event retrieval in large collections of videos. First, we propose hyper-pooling strategies that encode the frame descriptors into a representation of the video sequence in a stable manner. Our best choices compare favorably with regular pooling techniques based on k-means quantization. Second, we introduce a technique to improve the ranking. It can be interpreted either as a query expansion method or as a similarity a…

  6. An Indoor Video Surveillance System with Intelligent Fall Detection Capability

    Directory of Open Access Journals (Sweden)

    Ming-Chih Chen

    2013-01-01

    Full Text Available This work presents a novel indoor video surveillance system, capable of detecting the falls of humans. The proposed system can detect and evaluate human posture as well. To evaluate human movements, the background model is developed using the codebook method, and the possible positions of moving objects are extracted using background and shadow elimination. Extracting the foreground image introduces noise and damage in this image. The noise is therefore eliminated using morphological and size filters, and the damaged image is repaired. When the image object of a human is extracted, whether or not the posture has changed is evaluated using the aspect ratio and height of the human body. Meanwhile, the proposed system detects a change of posture and extracts the histogram of the object projection to represent the appearance. The histogram becomes the input vector of the K-Nearest Neighbor (K-NN) algorithm and is used to evaluate the posture of the object. Capable of accurately detecting different postures of a human, the proposed system increases the fall detection accuracy. Importantly, the proposed method detects the posture using the frame ratio and the displacement of height in an image. Experimental results demonstrate that the proposed system can further improve the system performance and the fall identification accuracy.

  7. Physical models for moving shadow and object detection in video.

    Science.gov (United States)

    Nadimi, Sohail; Bhanu, Bir

    2004-08-01

    Current moving object detection systems typically detect shadows cast by the moving object as part of the moving object. In this paper, the problem of separating moving cast shadows from the moving objects in an outdoor environment is addressed. Unlike previous work, we present an approach that does not rely on any geometrical assumptions such as camera location and ground surface/object geometry. The approach is based on a new spatio-temporal albedo test and dichromatic reflection model and accounts for both the sun and the sky illuminations. Results are presented for several video sequences representing a variety of ground materials when the shadows are cast on different surface types. These results show that our approach is robust to widely different background and foreground materials, and illuminations.

  8. Detection and Recognition of Abnormal Running Behavior in Surveillance Video

    Directory of Open Access Journals (Sweden)

    Ying-Ying Zhu

    2012-01-01

    Full Text Available Abnormal running behavior frequently happens in robbery and other criminal cases. In order to identify these abnormal behaviors, a method to detect and recognize abnormal running behavior is presented, based on spatiotemporal parameters. Meanwhile, to obtain more accurate spatiotemporal parameters and improve the real-time performance of the algorithm, a multitarget tracking algorithm, based on the intersection area among the minimum enclosing rectangles of the moving objects, is presented. The algorithm can effectively judge and exclude intersections among multiple targets and interference, which makes the tracking more accurate and more robust. Experimental results show that the combination of these two algorithms can effectively detect and recognize abnormal running behavior in surveillance videos.

  9. A video surveillance system designed to detect multiple falls

    Directory of Open Access Journals (Sweden)

    Ming-Chih Chen

    2016-04-01

    Full Text Available This work presents a fall detection system that is based on image processing technology. The system can detect falls by various humans via analysis of video frames. First, the system utilizes a Gaussian mixture background model to generate information about the background, and the noise and shadow of the background are eliminated to extract the possible positions of moving objects. The extraction of the foreground image generates noise and damage. Therefore, morphological and size filters are utilized to eliminate this noise and repair the damage to the image. Extraction of the foreground image yields the locations of human heads in the image. The median point, height, and aspect ratio of the people in the image are calculated. These characteristics are utilized to trace objects. The change of these characteristics among consecutive images can be used to determine whether persons enter or leave the scene. The method of fall detection uses the height and aspect ratio of the human body, analyzes images in which one person overlaps with another, and detects whether a human has fallen or not. Experimental results demonstrate that the proposed method can efficiently detect falls by multiple persons.
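
    A minimal sketch of this kind of pipeline, assuming OpenCV 4, is shown below: a Gaussian mixture background subtractor, morphological cleanup, and a bounding-box aspect-ratio rule as a crude stand-in for the posture-change cue. The video path, area filter, and aspect-ratio threshold are assumed values.

```python
import cv2

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)   # Gaussian mixture background model
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("ward_camera.mp4")                     # placeholder path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # morphological noise cleanup
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:                          # drop small blobs (size filter)
            continue
        x, y, w, h = cv2.boundingRect(c)
        if w / float(h) > 1.3:                                # wide, low silhouette: possible fall
            print("possible fall at bounding box", (x, y, w, h))
cap.release()
```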

  10. Video Analysis Verification of Head Impact Events Measured by Wearable Sensors.

    Science.gov (United States)

    Cortes, Nelson; Lincoln, Andrew E; Myer, Gregory D; Hepburn, Lisa; Higgins, Michael; Putukian, Margot; Caswell, Shane V

    2017-08-01

    Wearable sensors are increasingly used to quantify the frequency and magnitude of head impact events in multiple sports. There is a paucity of evidence that verifies head impact events recorded by wearable sensors. To utilize video analysis to verify head impact events recorded by wearable sensors and describe the respective frequency and magnitude. Cohort study (diagnosis); Level of evidence, 2. Thirty male (mean age, 16.6 ± 1.2 years; mean height, 1.77 ± 0.06 m; mean weight, 73.4 ± 12.2 kg) and 35 female (mean age, 16.2 ± 1.3 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) players volunteered to participate in this study during the 2014 and 2015 lacrosse seasons. Participants were instrumented with GForceTracker (GFT; boys) and X-Patch sensors (girls). Simultaneous game video was recorded by a trained videographer using a single camera located at the highest midfield location. One-third of the field was framed and panned to follow the ball during games. Videographic and accelerometer data were time synchronized. Head impact counts were compared with video recordings and were deemed valid if (1) the linear acceleration was ≥20 g, (2) the player was identified on the field, (3) the player was in camera view, and (4) the head impact mechanism could be clearly identified. Descriptive statistics of peak linear acceleration (PLA) and peak rotational velocity (PRV) for all verified head impacts ≥20 g were calculated. For the boys, a total of 1063 recorded impacts (2014: n = 545; 2015: n = 518) were logged by the GFT between game start and end times (mean PLA, 46 ± 31 g; mean PRV, 1093 ± 661 deg/s) during 368 player-games. Of these impacts, 690 were verified via video analysis (65%; mean PLA, 48 ± 34 g; mean PRV, 1242 ± 617 deg/s). The X-Patch sensors, worn by the girls, recorded a total of 180 impacts during the course of the games, and 58 (2014: n = 33; 2015: n = 25) were verified via video analysis (32%; mean PLA, 39 ± 21 g; mean PRV, 1664 …

  11. Automated Video Detection of Epileptic Convulsion Slowing as a Precursor for Post-Seizure Neuronal Collapse.

    Science.gov (United States)

    Kalitzin, Stiliyan N; Bauer, Prisca R; Lamberts, Robert J; Velis, Demetrios N; Thijs, Roland D; Lopes Da Silva, Fernando H

    2016-12-01

    Automated monitoring and alerting for adverse events in people with epilepsy can provide higher security and quality of life for those who suffer from this debilitating condition. Recently, we found a relation between clonic slowing at the end of a convulsive seizure (CS) and the occurrence and duration of a subsequent period of postictal generalized EEG suppression (PGES). Prolonged periods of PGES can be predicted by the amount of progressive increase of interclonic intervals (ICIs) during the seizure. The purpose of the present study is to develop an automated, remote video sensing-based algorithm for real-time detection of significant clonic slowing that can be used to alert for PGES. This may help preventing sudden unexpected death in epilepsy (SUDEP). The technique is based on our previously published optical flow video sequence processing paradigm that was applied for automated detection of major motor seizures. Here, we introduce an integral Radon-like transformation on the time-frequency wavelet spectrum to detect log-linear frequency changes during the seizure. We validate the automated detection and quantification of the ICI increase by comparison to the results from manually processed electroencephalography (EEG) traces as "gold standard". We studied 48 cases of convulsive seizures for which synchronized EEG-video recordings were available. In most cases, the spectral ridges obtained from Gabor-wavelet transformations of the optical flow group velocities were in close proximity to the ICI traces detected manually from EEG data during the seizure. The quantification of the slowing-down effect measured by the dominant angle in the Radon transformed spectrum was significantly correlated with the exponential ICI increase factors obtained from manual detection. If this effect is validated as a reliable precursor of PGES periods that lead to or increase the probability of SUDEP, the proposed method would provide an efficient alerting device.

  12. Design of an online video edge detection device for bottle caps based on FPGA

    OpenAIRE

    Donghui LIU; Lina TONG; Jiashuo WANG; Xiaoyun SUN; Xiaoying ZUO; Yakun DU; Zhenzhou WANG

    2015-01-01

    An online video edge detection device for bottle caps is designed and implemented using an OV7670 video module and an FPGA-based control unit. By Verilog language programming, the device realizes the menu-type parametric setting of the external VGA display, and completes the Roberts edge detection of the real-time video image, which improves the speed of image processing. By improving the detection algorithm, the noise is effectively suppressed, and clear and coherent edge images are derived. The design improves the working environment, and avoids the harm to human body.

  13. Cross-domain active learning for video concept detection

    Science.gov (United States)

    Li, Huan; Li, Chao; Shi, Yuan; Xiong, Zhang; Hauptmann, Alexander G.

    2011-08-01

    As video data from a variety of different domains (e.g., news, documentaries, entertainment) have distinctive data distributions, cross-domain video concept detection becomes an important task, in which one can reuse the labeled data of one domain to benefit the learning task in another domain with insufficient labeled data. In this paper, we approach this problem by proposing a cross-domain active learning method which iteratively queries labels of the most informative samples in the target domain. Traditional active learning assumes that the training (source domain) and test data (target domain) are from the same distribution. However, it may fail when the two domains have different distributions because querying informative samples according to a base learner that initially learned from source domain may no longer be helpful for the target domain. In our paper, we use the Gaussian random field model as the base learner which has the advantage of exploring the distributions in both domains, and adopt uncertainty sampling as the query strategy. Additionally, we present an instance weighting trick to accelerate the adaptability of the base learner, and develop an efficient model updating method which can significantly speed up the active learning process. Experimental results on TRECVID collections highlight the effectiveness.
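
    The uncertainty-sampling loop itself can be sketched independently of the paper's Gaussian random field base learner; below, a logistic regression stands in as the probabilistic classifier, and the source/target data are synthetic with an artificial distribution shift.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.normal(0.0, 1, (200, 5)); y_source = (X_source[:, 0] > 0.0).astype(int)
X_target = rng.normal(0.5, 1, (300, 5)); y_target = (X_target[:, 0] > 0.3).astype(int)  # shifted domain

labeled_X, labeled_y = list(X_source), list(y_source)
pool = list(range(len(X_target)))

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                                   # query 20 target-domain labels
    clf.fit(np.array(labeled_X), np.array(labeled_y))
    probs = clf.predict_proba(X_target[pool])[:, 1]
    q = pool[int(np.argmin(np.abs(probs - 0.5)))]     # most uncertain target sample
    labeled_X.append(X_target[q]); labeled_y.append(y_target[q])   # "oracle" provides the label
    pool.remove(q)

print("target accuracy:", clf.score(X_target, y_target))
```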

  14. Automatic detection of artifacts in converted S3D video

    Science.gov (United States)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.

  15. Motion Pattern Extraction and Event Detection for Automatic Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Benabbas Yassine

    2011-01-01

    Full Text Available Efficient analysis of human behavior in video surveillance scenes is a very challenging problem. Most traditional approaches fail when applied in real conditions and contexts, with issues such as the number of persons, appearance ambiguity, and occlusion. In this work, we propose to deal with this problem by modeling the global motion information obtained from optical flow vectors. The obtained direction and magnitude models learn the dominant motion orientations and magnitudes at each spatial location of the scene and are used to detect the major motion patterns. The applied region-based segmentation algorithm groups local blocks that share the same motion direction and speed and allows a subregion of the scene to appear in different patterns. The second part of the approach consists of detecting events related to groups of people, namely merge, split, walk, run, local dispersion, and evacuation, by analyzing the instantaneous optical flow vectors and comparing them to the learned models. The approach is validated and experimented on standard datasets of the computer vision community. The qualitative and quantitative results are discussed.

  16. Autonomous Gait Event Detection with Portable Single-Camera Gait Kinematics Analysis System

    Directory of Open Access Journals (Sweden)

    Cheng Yang

    2016-01-01

    Full Text Available Laboratory-based nonwearable motion analysis systems have significantly advanced with robust objective measurement of the limb motion, resulting in quantified, standardized, and reliable outcome measures compared with traditional, semisubjective, observational gait analysis. However, the requirement for large laboratory space and operational expertise makes these systems impractical for gait analysis at local clinics and homes. In this paper, we focus on autonomous gait event detection with our bespoke, relatively inexpensive, and portable, single-camera gait kinematics analysis system. Our proposed system includes video acquisition with camera calibration, Kalman filter + Structural-Similarity-based marker tracking, autonomous knee angle calculation, video-frame-identification-based autonomous gait event detection, and result visualization. The only operational effort required is the marker-template selection for tracking initialization, aided by an easy-to-use graphic user interface. The knee angle validation on 10 stroke patients and 5 healthy volunteers against a gold standard optical motion analysis system indicates very good agreement. The autonomous gait event detection shows high detection rates for all gait events. Experimental results demonstrate that the proposed system can automatically measure the knee angle and detect gait events with good accuracy and thus offer an alternative, cost-effective, and convenient solution for clinical gait kinematics analysis.
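
    The knee-angle step can be computed from three tracked 2D marker positions as the angle between the thigh and shank vectors. The sketch below is a generic illustration; the marker coordinates are made-up pixel values, not data from the system described above.

```python
import numpy as np

def knee_angle(hip, knee, ankle):
    """Angle (degrees) between the thigh (knee->hip) and shank (knee->ankle) vectors."""
    thigh = np.asarray(hip, float) - np.asarray(knee, float)
    shank = np.asarray(ankle, float) - np.asarray(knee, float)
    cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical marker positions (in pixels) for one video frame; a nearly straight leg
# should give an angle close to 180 degrees.
print(round(knee_angle(hip=(320, 200), knee=(330, 330), ankle=(325, 460)), 1))
```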

  17. Detecting and extracting identifiable information from vehicles in videos

    Science.gov (United States)

    Roheda, Siddharth; Kalva, Hari; Naik, Mehul

    2015-03-01

    This paper presents a system to detect and extract identifiable information such as license plates, make, model, color, and bumper stickers present on vehicles. The goal of this work is to develop a system that automatically describes a vehicle just as a person would. This information can be used to improve traffic surveillance systems. The presented solution relies on efficient segmentation and the structure of license plates to identify and extract information from vehicles. The system was evaluated on videos captured on Florida highways and is expected to work in other regions with little or no modification. Results show that the license plate was successfully segmented in 92% of the cases, the make and the model of the car were segmented in 93% of the cases, and bumper stickers were segmented in 92.5% of the cases. Overall recognition accuracy was 87%.

  18. Fault detection based on microseismic events

    Science.gov (United States)

    Yin, Chen

    2017-09-01

    In unconventional reservoirs, small faults both allow the flow of oil and gas and act as obstacles to exploration, because of (1) fracturing that facilitates fluid migration, (2) reservoir flooding, and (3) the triggering of small earthquakes. These small faults are generally not detected because of the low seismic resolution. However, such small faults are very active and release sufficient energy to initiate a large number of microseismic events (MEs) during hydraulic fracturing. In this study, we identified microfractures (MF) from hydraulic fracturing and natural small faults based on microseismicity characteristics, such as the time-space distribution, source mechanism, magnitude, amplitude, and frequency. First, we identified the mechanism of small faults and MF by reservoir stress analysis and calibrated the ME based on the microseismic magnitude. The dynamic characteristics (frequency and amplitude) of MEs triggered by natural faults and MF were analyzed; moreover, the geometry and activity types of natural faults and MF were grouped according to the source mechanism. Finally, the differences among time-space distribution, magnitude, source mechanism, amplitude, and frequency were used to differentiate natural faults from manmade fractures.

  19. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    Science.gov (United States)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature relevant to video manipulation detection methods which accomplish blind authentication without referring to any auxiliary information. We present a review of various existing methods in the literature; much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  20. Event Coverage Detection and Event Source Determination in Underwater Wireless Sensor Networks

    OpenAIRE

    Zhangbing Zhou; Riliang Xing; Yucong Duan; Yueqin Zhu; Jianming Xiang

    2015-01-01

    With the advent of the Internet of Underwater Things, smart things are deployed in the ocean space and establish underwater wireless sensor networks for the monitoring of vast and dynamic underwater environments. When events are found to have possibly occurred, accurate event coverage should be detected, and potential event sources should be determined for the enactment of prompt and proper responses. To address this challenge, a technique that detects event coverage and determines event sources …

  1. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the start of the current millennium. Video analytics is intended to solve the problem of the inability to exploit video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi…

  2. Detection of bubble nucleation event in superheated drop detector ...

    Indian Academy of Sciences (India)

    The present work demonstrates the detection of bubble nucleation events by using the pressure sensor. The associated circuits for the measurement are described in this article. The detection of events is verified by measuring the events with the acoustic sensor. The measurement was done using drops of various sizes to ...

  3. Comparative study of motion detection methods for video surveillance systems

    Science.gov (United States)

    Sehairi, Kamal; Chouireb, Fatima; Meunier, Jean

    2017-03-01

    The objective of this study is to compare several change detection methods for a monostatic camera and identify the best method for different complex environments and backgrounds in indoor and outdoor scenes. To this end, we used the CDnet video dataset as a benchmark that consists of many challenging problems, ranging from basic simple scenes to complex scenes affected by bad weather and dynamic backgrounds. Twelve change detection methods, ranging from simple temporal differencing to more sophisticated methods, were tested and several performance metrics were used to precisely evaluate the results. Because most of the considered methods have not previously been evaluated on this recent large-scale dataset, this work compares them to fill a gap in the literature and thus complements previous comparative evaluations. Our experimental results show that there is no perfect method for all challenging cases; each method performs well in certain cases and fails in others. However, this study enables the user to identify the most suitable method for his or her needs.
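
    The simplest family in such comparisons, two-frame temporal differencing, can be sketched as follows. The threshold and the toy frames are assumed values, and the snippet does not reproduce the CDnet evaluation protocol.

```python
import numpy as np

def temporal_difference_mask(prev_frame, curr_frame, threshold=25):
    """Mark pixels whose absolute intensity change between consecutive frames exceeds a threshold."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a bright 4x4 "object" moves two pixels to the right between frames.
prev = np.zeros((20, 20), dtype=np.uint8); prev[8:12, 4:8] = 255
curr = np.zeros((20, 20), dtype=np.uint8); curr[8:12, 6:10] = 255
mask = temporal_difference_mask(prev, curr)
print("changed pixels:", int(mask.sum()))   # the leading and trailing edges of the object
```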

  4. Non-Linguistic Vocal Event Detection Using Online Random Forest

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    Accurate detection of non-linguistic vocal events in social signals can have a great impact on the applicability of speech-enabled interactive systems. In this paper, we investigate the use of random forests for vocal event detection. The random forest technique has been successfully employed in many areas such as object detection, face recognition, and audio event detection. This paper proposes to use an online random forest technique for detecting laughter and fillers and for analyzing the importance of various features for non-linguistic vocal event classification through permutation. The results …

  5. Monkeying around with the Gorillas in Our Midst: Familiarity with an Inattentional-Blindness Task Does Not Improve the Detection of Unexpected Events

    Directory of Open Access Journals (Sweden)

    Daniel J Simons

    2010-04-01

    Full Text Available When people know to look for an unexpected event (e.g., a gorilla in a basketball game), they tend to notice that event. But does knowledge that an unexpected event might occur improve the detection of other unexpected events in a similar scene? Subjects watched a new video in which, in addition to the gorilla, two other unexpected events occurred: a curtain changed color, and one player left the scene. Subjects who knew about videos like this one consistently spotted the gorilla in the new video, but they were slightly less likely to notice the other events. Foreknowledge that unexpected events might occur does not enhance the ability to detect other such events.

  6. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

    Full Text Available A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. Firstly, the algorithm employs the motion vector (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm can avoid a large amount of computation in the visual perception analysis compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as part of a video standard codec at medium-to-low bit-rates or combined with other algorithms in fast video coding.

  7. Action and Event Recognition in Videos by Learning From Heterogeneous Web Sources.

    Science.gov (United States)

    Niu, Li; Xu, Xinxing; Chen, Lin; Duan, Lixin; Xu, Dong

    2017-06-01

    In this paper, we propose new approaches for action and event recognition by leveraging a large number of freely available Web videos (e.g., from Flickr video search engine) and Web images (e.g., from Bing and Google image search engines). We address this problem by formulating it as a new multi-domain adaptation problem, in which heterogeneous Web sources are provided. Specifically, we are given different types of visual features (e.g., the DeCAF features from Bing/Google images and the trajectory-based features from Flickr videos) from heterogeneous source domains and all types of visual features from the target domain. Considering the target domain is more relevant to some source domains, we propose a new approach named multi-domain adaptation with heterogeneous sources (MDA-HS) to effectively make use of the heterogeneous sources. In MDA-HS, we simultaneously seek for the optimal weights of multiple source domains, infer the labels of target domain samples, and learn an optimal target classifier. Moreover, as textual descriptions are often available for both Web videos and images, we propose a novel approach called MDA-HS using privileged information (MDA-HS+) to effectively incorporate the valuable textual information into our MDA-HS method, based on the recent learning using privileged information paradigm. MDA-HS+ can be further extended by using a new elastic-net-like regularization. We solve our MDA-HS and MDA-HS+ methods by using the cutting-plane algorithm, in which a multiple kernel learning problem is derived and solved. Extensive experiments on three benchmark data sets demonstrate that our proposed approaches are effective for action and event recognition without requiring any labeled samples from the target domain.

  8. A change detection approach to moving object detection in low frame-rate video

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Harvey, Neal R [Los Alamos National Laboratory; Theiler, James P [Los Alamos National Laboratory

    2009-01-01

    Moving object detection is of significant interest in temporal image analysis since it is a first step in many object identification and tracking applications. A key component in almost all moving object detection algorithms is a pixel-level classifier, where each pixel is predicted to be either part of a moving object or part of the background. In this paper we investigate a change detection approach to the pixel-level classification problem and evaluate its impact on moving object detection. The change detection approach that we investigate was previously applied to multi- and hyper-spectral datasets, where images were typically taken several days, or months, apart. In this paper, we apply the approach to low frame-rate (1-2 frames per second) video datasets.

  9. Segmentation Based Video Steganalysis to Detect Motion Vector Modification

    Directory of Open Access Journals (Sweden)

    Peipei Wang

    2017-01-01

    Full Text Available This paper presents a steganalytic approach against video steganography which modifies motion vectors (MV) in a content-adaptive manner. Current video steganalytic schemes extract features from fixed-length frames of the whole video and do not take advantage of the content diversity. Consequently, the effectiveness of the steganalytic feature is influenced by video content, and the problem of cover source mismatch also affects the steganalytic performance. The goal of this paper is to propose a steganalytic method which can suppress the differences of statistical characteristics caused by video content. The given video is segmented into subsequences according to block motion in every frame. The steganalytic features extracted from each category of subsequences with close motion intensity are used to build one classifier. The final steganalytic result can be obtained by fusing the results of the weighted classifiers. The experimental results have demonstrated that our method can effectively improve the performance of video steganalysis, especially for videos of low bitrate and low embedding ratio.

  10. Simultaneous video stabilization and moving object detection in turbulence.

    Science.gov (United States)

    Oreifej, Omar; Li, Xin; Shah, Mubarak

    2013-02-01

    Turbulence mitigation refers to the stabilization of videos with nonuniform deformations due to the influence of optical turbulence. Typical approaches for turbulence mitigation follow averaging or dewarping techniques. Although these methods can reduce the turbulence, they distort the independently moving objects, which can often be of great interest. In this paper, we address the novel problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object. We simplify this extremely difficult problem into a minimization of nuclear norm, Frobenius norm, and l1 norm. Our method is based on two observations: First, the turbulence causes dense and Gaussian noise and therefore can be captured by Frobenius norm, while the moving objects are sparse and thus can be captured by l1 norm. Second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects.

  11. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    OpenAIRE

    Tao Yang; Xiwen Wang; Bowei Yao; Jing Li; Yanning Zhang; Zhannan He; Wencheng Duan

    2016-01-01

    Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, which provides a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels …

  12. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including input camera interface, designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects the relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.

  13. Modeling Concept Dependencies for Event Detection

    Science.gov (United States)

    2014-04-04

    Fragmentary excerpt: the proposed approach is evaluated against early and late fusion; related work by Althoff et al. [1], Izidinia and Shah [7], and Habibian et al. [5] focuses on event recognition; rank lists for the DTF-HOG, DTF-MBH, and STIP features were provided by collaborators.

  14. Gaussian Process Regression-Based Video Anomaly Detection and Localization With Hierarchical Feature Representation.

    Science.gov (United States)

    Cheng, Kai-Wen; Chen, Yie-Tarng; Fang, Wen-Hsien

    2015-12-01

    This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR) which is fully non-parametric and robust to the noisy training data, and supports sparse features. While most research on anomaly detection has focused more on detecting local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To simultaneously detect local and global anomalies, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of the nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using the GPR, based on which a novel inference method for computing the likelihood of an observed interaction is also developed. Thereafter, these local likelihood scores are integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, it is the first time GPR is employed to model the relationship of the nearby STIPs for anomaly detection. Simulations based on four widespread datasets show that the new method outperforms the main state-of-the-art methods with lower computational burden.

  15. Design of an online video edge detection device for bottle caps based on FPGA

    Directory of Open Access Journals (Sweden)

    Donghui LIU

    2015-06-01

    Full Text Available An online video edge detection device for bottle caps is designed and implemented using an OV7670 video module and an FPGA-based control unit. By Verilog language programming, the device realizes the menu-type parametric setting of the external VGA display, and completes the Roberts edge detection of the real-time video image, which improves the speed of image processing. By improving the detection algorithm, the noise is effectively suppressed, and clear and coherent edge images are derived. The design improves the working environment, and avoids the harm to human body.
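
    A software illustration of the Roberts cross operator used by the device is sketched below (the FPGA design itself is not reproduced); the synthetic "bottle cap" image and the threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

ROBERTS_X = np.array([[1, 0], [0, -1]], dtype=float)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], dtype=float)

def roberts_edges(image, threshold=30.0):
    """Roberts cross edge detection: gradient magnitude from two diagonal differences."""
    gx = convolve(image.astype(float), ROBERTS_X)
    gy = convolve(image.astype(float), ROBERTS_Y)
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8) * 255

# Toy "bottle cap": a bright disc on a dark background.
yy, xx = np.mgrid[:64, :64]
cap_image = ((yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2).astype(np.uint8) * 200
edges = roberts_edges(cap_image)
print("edge pixels:", int((edges > 0).sum()))   # a ring around the cap boundary
```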

  16. Zero shot Event Detection using Multi modal Fusion of Weakly Supervised Concepts (Open Access)

    Science.gov (United States)

    2014-09-25

    Fragmentary excerpt: speech activity detection (SAD) and a hidden Markov model (HMM) based multi-pass large-vocabulary ASR are used to obtain the speech content of the video; detection scores are averaged across the video to obtain the final video-level feature vector. The SAD system is GMM-based, employing two GMMs for speech and non-speech observations respectively, and the SAD model incorporates video …

  17. Detection of hypoglycemic events through wearable sensors

    OpenAIRE

    Ranvier, Jean-Eudes; Dubosson, Fabien; Calbimonte, Jean-Paul; Aberer, Karl

    2016-01-01

    Diabetic patients are dependent on external substances to balance their blood glucose level. In order to control this level, they historically needed to sample a drop of blood from their hand and have it analyzed. Recently, other directions have emerged to offer alternative ways to estimate glucose level. In this paper, we present our ongoing work on a framework for inferring semantically annotated glycemic events on the patient, which leverages mobile wearable sensors on a sport belt.

  18. Detecting surface events at the COBRA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Tebruegge, Jan [Exp. Physik IV, TU Dortmund (Germany); Collaboration: COBRA-Collaboration

    2015-07-01

    The aim of the COBRA experiment is to prove the existence of neutrinoless double beta decay and to measure its half-life. For this purpose the COBRA demonstrator, a prototype for a large-scale experiment, is operated at the Gran Sasso Underground Laboratory (LNGS) in Italy. The demonstrator is a detector array made of 64 Cadmium-Zinc-Telluride (CdZnTe) semiconductor detectors in the coplanar grid anode configuration. Each detector is 1 cm³ in size. This setup is used to investigate the experimental issues of operating CdZnTe detectors in low-background mode and to identify potential background components. As the "detector = source" principle is used, the neutrinoless double beta decay that COBRA searches for happens within the whole detector volume. Consequently, events on the surface of the detectors are considered background. These surface events are a main background component, stemming mainly from natural radioactivity, especially radon. This talk explains to what extent surface events occur and shows how they are recognized and vetoed in the analysis using pulse shape discrimination algorithms.

  19. Event Detection Using “Variable Module Graphs” for Home Care Applications

    Directory of Open Access Journals (Sweden)

    Thomas S. Huang

    2007-01-01

    Full Text Available Technology has reached new heights, making sound and video capture devices ubiquitous and affordable. We propose a paradigm to exploit this technology for home care applications, especially for surveillance and complex event detection. Complex vision tasks such as event detection in a surveillance video can be divided into subtasks such as human detection, tracking, recognition, and trajectory analysis. The video can be thought of as being composed of various features. These features can be roughly arranged in a hierarchy from low-level features to high-level features. Low-level features include edges and blobs, and high-level features include objects and events. Loosely, low-level feature extraction is based on signal/image processing techniques, while high-level feature extraction is based on machine learning techniques. Traditionally, vision systems extract features in a feed-forward manner on the hierarchy; that is, certain modules extract low-level features and other modules make use of these low-level features to extract high-level features. Along with others in the research community, we have worked on this design approach. In this paper, we elaborate on the recently introduced V/M graph. We present our work on using this paradigm for developing home care applications. The primary objective is surveillance of a location for subject tracking as well as detecting irregular or anomalous behavior. This is done automatically with minimal human involvement, where the system has been trained to raise an alarm when anomalous behavior is detected.

  20. Factors influencing surgeons' intraoperative leadership: video analysis of unanticipated events in the operating room.

    Science.gov (United States)

    Parker, Sarah Henrickson; Flin, Rhona; McKinley, Aileen; Yule, Steven

    2014-01-01

    The achievement of surgical goals and the successful functioning of operating room (OR) teams are dependent on leadership. The attending surgeon is a team leader during an operation, with responsibility for task accomplishment by the clinical team. This study examined surgeons' leadership behaviors during surgical procedures, with particular reference to the effect of intraoperative events on leadership. Videos of operations (n = 29) recorded at three UK teaching hospitals were analyzed to identify and classify surgeons' intraoperative leadership behaviors using the Surgeons' Leadership Inventory. The frequency and type of leadership behaviors were compared before and after the point of no return (PONR) (n = 24), and during an unexpected intraoperative event (n = 5). Most of the surgeons' leadership behaviors were directed toward the resident during an operation. No significant differences were found for the overall number or type of leadership behaviors pre- and post-PONR. The frequency of leadership behaviors classified as "Training" and "Supporting others" significantly decreased during an unanticipated intraoperative event. Overall, surgeons engaged in the same leadership behaviors throughout the course of an operation unless they were dealing with an unanticipated event. Surgeons appeared to adopt a "one size fits all" leadership style approach regardless of the team or situation. Additionally, surgeons seemed to limit their intraoperative leadership focus to other surgeons rather than to the wider OR team.

  1. Abnormal Event Detection Using Local Sparse Representation

    DEFF Research Database (Denmark)

    Ren, Huamin; Moeslund, Thomas B.

    2014-01-01

    … measurement based on the difference between the normal space and the local space. Specifically, we provide reasonable normal bases through repeated K spectral clustering. Then, for each testing feature, we first use temporal neighbors to form a local space. An abnormal event is found if any abnormal feature is found that satisfies the following: the distance between its local space and the normal space is large. We evaluate our method on two public benchmark datasets: the UCSD and Subway Entrance datasets. The comparison to the state-of-the-art methods validates our method's effectiveness.

  2. Slow motion replay detection of tennis video based on color auto-correlogram

    Science.gov (United States)

    Zhang, Xiaoli; Zhi, Min

    2012-04-01

    In this paper, an effective slow motion replay detection method for tennis videos which contain logo transitions is proposed. The method is based on the theory of the color auto-correlogram and is achieved by the following steps: First, detect the candidate logo transition areas from the video frame sequence. Second, generate the logo template. Then use the color auto-correlogram for similarity matching between video frames and the logo template in the candidate logo transition areas. Finally, select logo frames according to the matching results and locate the borders of slow motion accurately by using the brightness change during the logo transition process. Experiments show that, unlike previous approaches, this method greatly improves border locating accuracy, and it can be used for other sports videos which have logo transitions, too. In addition, as the algorithm only processes the contents in the central area of the video frames, its speed has been greatly improved.

  3. Fast Temporal Activity Proposals for Efficient Detection of Human Actions in Untrimmed Videos

    KAUST Repository

    Heilbron, Fabian Caba

    2016-12-13

    In many large-scale video analysis scenarios, one is interested in localizing and recognizing human activities that occur in short temporal intervals within long untrimmed videos. Current approaches for activity detection still struggle to handle large-scale video collections and the task remains relatively unexplored. This is in part due to the computational complexity of current action recognition approaches and the lack of a method that proposes fewer intervals in the video, where activity processing can be focused. In this paper, we introduce a proposal method that aims to recover temporal segments containing actions in untrimmed videos. Building on techniques for learning sparse dictionaries, we introduce a learning framework to represent and retrieve activity proposals. We demonstrate the capabilities of our method in not only producing high quality proposals but also in its efficiency. Finally, we show the positive impact our method has on recognition performance when it is used for action detection, while running at 10FPS.

  4. Subsurface Event Detection and Classification Using Wireless Signal Networks

    Directory of Open Access Journals (Sweden)

    Muhannad T. Suleiman

    2012-11-01

    Full Text Available Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list and experimental data on how radio propagation is affected by soil properties in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as a main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. After event detection, the window-based classifier classifies geo-events on the event-occurring regions, which are called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in laboratory experiments. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
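
    The window-based minimum distance classification step can be sketched as follows: summarize each window of signal-strength readings by simple statistics and assign it to the nearest class centroid. The class names, centroid values, and RSSI trace below are invented for illustration.

```python
import numpy as np

def window_features(rssi, window=10):
    """Split an RSSI trace into windows and summarize each by (mean, standard deviation)."""
    w = rssi[: len(rssi) // window * window].reshape(-1, window)
    return np.column_stack([w.mean(axis=1), w.std(axis=1)])

def min_distance_classify(feature, centroids):
    """Assign the window to the class whose centroid is nearest (minimum Euclidean distance)."""
    names = list(centroids)
    d = [np.linalg.norm(feature - centroids[k]) for k in names]
    return names[int(np.argmin(d))]

# Hypothetical class centroids in (mean dBm, std dBm) feature space.
centroids = {"dry soil": np.array([-60.0, 1.0]),
             "water intrusion": np.array([-75.0, 3.0])}

rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(-60, 1, 50), rng.normal(-75, 3, 50)])   # leak begins halfway
for f in window_features(trace):
    print(min_distance_classify(f, centroids))
```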

  5. Paroxysmal non-epileptic motor events in childhood: a clinical and video-EEG-polymyographic study.

    Science.gov (United States)

    Canavese, Carlotta; Canafoglia, Laura; Costa, Caterina; Zibordi, Federica; Zorzi, Giovanna; Binelli, Simona; Franceschetti, Silvana; Nardocci, Nardo

    2012-04-01

    The aim of this article was to describe the phenomenology and polymyographic features of paroxysmal non-epileptic motor events (PNMEs) observed in a series of typically developing children and children with neurological impairment. We conducted a retrospective evaluation of 63 individuals (29 females; 34 males) affected by PNMEs at the National Neurological Institute 'C. Besta' between 2006 and 2008. Individuals were included in the study if they had PNMEs documented by a video-electroencephalography-polymyographic study and were aged between 1 month and 18 years (mean age at the time of video-electroencephalography-polymyography: 5y 10mo). In 45 of the 63 participants (71%), PNMEs were associated with other neurological conditions (secondary), including epilepsy, whereas in 18 participants PNME was the only neurological symptom (primary). Clinical features allowed classification of the motor disturbance into usual movement disorder categories in 31 individuals (49%); in the remaining 32 (51%), the movement disorder was characterized on the basis of a polymyographic pattern of 'jerks' or 'sustained contraction'. The most frequent PNMEs were paroxysmal dyskinesias, followed by startle, stereotypies, shuddering, sleep myoclonus, psychogenic movement disorders, and benign myoclonus of early infancy; the last syndrome was also observed in children with neurological impairment. In eight participants, PNMEs remained unclassified. PNMEs may occur both in healthy children and in children with neurological impairment and are caused by a wide range of static and progressive conditions. In the majority of children with neurological impairment and associated epilepsy, the PNMEs do not fit into the usual movement disorder categories. Video-electroencephalography-polymyography is therefore useful for characterizing them. © The Authors. Developmental Medicine & Child Neurology © 2012 Mac Keith Press.

  6. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

    Directory of Open Access Journals (Sweden)

    Nouar AlDahoul

    2018-01-01

    Full Text Available Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), training H-ELM takes 445 seconds. Learning in S-CNN takes 770 seconds with a high-performance Graphics Processing Unit (GPU).
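
    The optical-flow input to such models can be obtained with standard dense flow. The sketch below, assuming OpenCV's Farnebäck implementation and an illustrative file name and threshold, computes a per-pixel motion magnitude and a binary motion mask of the kind that could be passed to a downstream detector; it is not the paper's exact pipeline.

```python
import cv2
import numpy as np

def flow_motion_map(prev_gray, curr_gray, thresh=1.0):
    """Dense optical flow between two grayscale frames; returns the flow
    magnitude and a binary motion mask that could be fed to a downstream
    detector (e.g., a CNN feature extractor) as a candidate-region cue."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag = np.linalg.norm(flow, axis=2)
    return mag, (mag > thresh).astype(np.uint8)

# Hypothetical usage on an aerial clip named 'aerial.mp4'.
cap = cv2.VideoCapture("aerial.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mag, mask = flow_motion_map(prev_gray, gray)
    prev_gray = gray
cap.release()
```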

  7. Lesson Plan Prototype for International Space Station's Interactive Video Education Events

    Science.gov (United States)

    Zigon, Thomas

    1999-01-01

    The outreach and education components of the International Space Station Program are creating a number of materials, programs, and activities that educate and inform various groups as to the implementation and purposes of the International Space Station. One of the strategies for disseminating this information to K-12 students involves an electronic classroom using state-of-the-art video conferencing technology. K-12 classrooms are able to visit the JSC via an electronic field trip. Students interact with outreach personnel as they are taken on a tour of ISS mockups. Currently these events can be generally characterized as: being limited to one-shot events, providing only one opportunity for students to view the ISS mockups; using a "one to many" mode of communication; using a transmissive, lecture-based method of presenting information; having student interactions limited to Q&A during the live event; making limited use of media; and lacking any formal, performance-based demonstration of learning on the part of students. My project involved developing interactive lessons for K-12 students (specifically 7th grade) that will reflect a 2nd-generation design for electronic field trips. The goal of this design will be to create electronic field trips that will: conform to national education standards; more fully utilize existing information resources; integrate media into field trip presentations; make support media accessible to both presenters and students; challenge students to actively participate in field trip related activities; and provide students with opportunities to demonstrate learning.

  8. Video event data recording of a taxi driver used for diagnosis of epilepsy.

    Science.gov (United States)

    Sakurai, Kotaro; Yamamoto, Junko; Kurita, Tsugiko; Takeda, Youji; Kusumi, Ichiro

    2014-01-01

    A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he raced an engine for a long time while parked. The VEDR enabled confirmation that the accidents depended on an epileptic seizure and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only for traffic accident evidence; it might also contribute to a driver's health care and road safety.

  9. Video event data recording of a taxi driver used for diagnosis of epilepsy☆

    Science.gov (United States)

    Sakurai, Kotaro; Yamamoto, Junko; Kurita, Tsugiko; Takeda, Youji; Kusumi, Ichiro

    2014-01-01

    A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he raced an engine for a long time while parked. The VEDR enabled confirmation that the accidents depended on an epileptic seizure and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only for traffic accident evidence; it might also contribute to a driver's health care and road safety. PMID:25667862

  10. Video event data recording of a taxi driver used for diagnosis of epilepsy

    Directory of Open Access Journals (Sweden)

    Kotaro Sakurai

    2014-01-01

    Full Text Available A video event data recorder (VEDR in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by VEDR, which was extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he raced an engine for a long time while parked. The VEDR enabled confirmation that the accidents depended on an epileptic seizure and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only for traffic accident evidence; it might also contribute to a driver's health care and road safety.

  11. A Study of Vehicle Detection and Counting System Based on Video

    Directory of Open Access Journals (Sweden)

    Shuang XU

    2014-10-01

    Full Text Available This paper studies a video-based vehicle detection and counting system that provides vehicle detection, image processing of vehicle targets, and vehicle counting. Vehicle detection uses the inter-frame difference method together with vehicle shadow segmentation techniques. The image processing functions apply gray-scale conversion of the color images, image segmentation, mathematical morphology analysis, and image filling to the detected targets, and then extract the target vehicle. The counting function counts the detected vehicles. The system uses the inter-frame video difference method to detect vehicles and completes the counting function by adding a bounding frame to each vehicle and comparing it against a counting boundary, achieving a high recognition rate, fast operation, and easy use. The purpose of this paper is to enhance the modernization and automation of traffic management. This study can serve as a reference for the future development of related applications.
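
    As a rough sketch of such a pipeline, the code below combines inter-frame differencing, morphological clean-up, and a simple counting line; the thresholds, minimum blob area, line position, and file name are assumptions for illustration, and a real system would also need per-vehicle tracking to avoid counting the same vehicle in several frames.

```python
import cv2

def count_vehicles(video_path, line_y=300, min_area=500, diff_thresh=25):
    """Minimal sketch: inter-frame differencing plus a counting line.

    Blobs whose bounding box crosses the horizontal line at `line_y`
    are counted. Parameters are illustrative, not from the paper."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        # Morphological closing fills holes left by the frame difference.
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                                cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7)))
        # OpenCV 4.x API: findContours returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            x, y, w, h = cv2.boundingRect(c)
            if y <= line_y <= y + h:  # bounding box touches the counting line
                count += 1
        prev_gray = gray
    cap.release()
    return count
```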

  12. Visual sensor based abnormal event detection with moving shadow removal in home healthcare applications.

    Science.gov (United States)

    Lee, Young-Sook; Chung, Wan-Young

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities such as forward falls, backward falls, and falls to the side from normal activities.

  13. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    Directory of Open Access Journals (Sweden)

    Young-Sook Lee

    2012-01-01

    Full Text Available Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities such as forward falls, backward falls, and falls to the side from normal activities.

  14. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

    Full Text Available Video surveillance systems are based on video and image processing research areas within computer science. Video processing covers various methods used to track changes in the scene of a given video, and is nowadays one of the important areas of computer science. Two-dimensional videos are used in various segmentation, object detection, and tracking processes that appear in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking, and similar applications. Background subtraction (BS) is a frequently used approach for moving object detection and tracking, and similar methods exist in the literature. This research study proposes a more efficient method as an addition to the existing ones. Based on a model produced using adaptive background subtraction (ABS), the software of an object detection and tracking system is implemented. The performance of the developed system is tested in experimental work with related video datasets. The experimental results and discussion are given in the study.
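
    A minimal sketch of adaptive background subtraction, assuming a running-average background model updated only in static regions, is given below; the learning rate, threshold, and file name are illustrative and not taken from the paper.

```python
import cv2
import numpy as np

def adaptive_background_subtraction(video_path, alpha=0.01, thresh=30):
    """Running-average background model: the background is updated with
    learning rate alpha, and the foreground mask is the thresholded
    absolute difference between the current frame and the model."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        foreground = (np.abs(gray - background) > thresh).astype(np.uint8) * 255
        # Adapt the model only where no foreground was detected, so moving
        # objects are not absorbed into the background too quickly.
        static = foreground == 0
        background[static] = (1 - alpha) * background[static] + alpha * gray[static]
        yield foreground
    cap.release()

# Hypothetical usage: for mask in adaptive_background_subtraction("traffic.mp4"): ...
```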

  15. Road user behaviour analyses based on video detections

    DEFF Research Database (Denmark)

    Agerholm, Niels; Tønning, Charlotte; Madsen, Tanja Kidholm Osmann

    2017-01-01

    has been developed. It works as a watchdog: if a passing road user affects defined part(s) of the video frame, RUBA records the time of the activity. It operates with three types of detectors (defined parts of the video frame): 1) a road user passes the detector regardless of direction, 2) a road user passes the area in one pre-adjusted specific direction, and 3) a road user is standing still in the detector area. RUBA can also be adjusted so that it registers massive entities (e.g. cars) while less massive ones (e.g. cyclists) are not registered. The software has been used for various analyses of traffic behaviour: traffic counts with and without removal of different modes of transportation, traffic conflicts, traffic behaviour for specific traffic flows and modes, and comparisons of speeds in rebuilt road areas. While there is still space for improvement regarding data treatment speed...

  16. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.

  17. A Benchmark Dataset and Saliency-guided Stacked Autoencoders for Video-based Salient Object Detection.

    Science.gov (United States)

    Li, Jia; Xia, Changqun; Chen, Xiaowu

    2017-10-12

    Image-based salient object detection (SOD) has been extensively studied in past decades. However, video-based SOD is much less explored due to the lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos. In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects who free-view all videos. From the user data, we find that salient objects in a video can be defined as objects that consistently pop out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object/region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at the pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are constructed in an unsupervised manner that automatically infers a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. In experiments, the proposed unsupervised approach is compared with 31 state-of-the-art models on the proposed dataset and outperforms 30 of them, including 19 image-based classic (unsupervised or non-deep learning) models, six image-based deep learning models, and five video-based unsupervised models. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.

  18. Facial Video-Based Photoplethysmography to Detect HRV at Rest.

    Science.gov (United States)

    Moreno, J; Ramos-Castro, J; Movellan, J; Parrado, E; Rodas, G; Capdevila, L

    2015-06-01

    Our aim is to demonstrate the usefulness of photoplethysmography (PPG) for analyzing heart rate variability (HRV) using a standard 5-min test at rest with paced breathing, comparing the results with real RR intervals and testing supine and sitting positions. Simultaneous recordings of R-R intervals were conducted with a Polar system and a non-contact PPG, based on facial video recording on 20 individuals. Data analysis and editing were performed with individually designated software for each instrument. Agreement on HRV parameters was assessed with concordance correlations, effect size from ANOVA and Bland and Altman plots. For supine position, differences between video and Polar systems showed a small effect size in most HRV parameters. For sitting position, these differences showed a moderate effect size in most HRV parameters. A new procedure, based on the pixels that contained more heart beat information, is proposed for improving the signal-to-noise ratio in the PPG video signal. Results were acceptable in both positions but better in the supine position. Our approach could be relevant for applications that require monitoring of stress or cardio-respiratory health, such as effort/recuperation states in sports. © Georg Thieme Verlag KG Stuttgart · New York.
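
    For reference, the standard time-domain HRV parameters compared in such studies can be computed directly from a series of RR intervals; the sketch below (with a hypothetical RR series in milliseconds) is generic and not the authors' analysis software.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV parameters from a series of RR intervals
    (milliseconds), as typically reported for a 5-min resting test."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),               # mean heart rate
        "sdnn_ms": rr.std(ddof=1),                        # overall variability
        "rmssd_ms": np.sqrt(np.mean(diff ** 2)),          # short-term variability
        "pnn50_pct": 100.0 * np.mean(np.abs(diff) > 50),  # % successive diffs > 50 ms
    }

# Hypothetical RR series (ms) from either the Polar recording or the
# beat-to-beat intervals recovered from the facial PPG signal.
print(hrv_time_domain([812, 790, 805, 842, 798, 815, 830, 801]))
```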

  19. Amplitude Integrated Electroencephalography Compared With Conventional Video EEG for Neonatal Seizure Detection: A Diagnostic Accuracy Study.

    Science.gov (United States)

    Rakshasbhuvankar, Abhijeet; Rao, Shripada; Palumbo, Linda; Ghosh, Soumya; Nagarajan, Lakshmi

    2017-08-01

    This diagnostic accuracy study compared the accuracy of seizure detection by amplitude-integrated electroencephalography with the criterion standard conventional video EEG in term and near-term infants at risk of seizures. Simultaneous recording of amplitude-integrated EEG (2-channel amplitude-integrated EEG with raw trace) and video EEG was done for 24 hours for each infant. Amplitude-integrated EEG was interpreted by a neonatologist; video EEG was interpreted by a neurologist independently. Thirty-five infants were included in the analysis. In the 7 infants with seizures on video EEG, there were 169 seizure episodes on video EEG, of which only 57 were identified by amplitude-integrated EEG. Amplitude-integrated EEG had a sensitivity of 33.7% for individual seizure detection. Amplitude-integrated EEG had an 86% sensitivity for detection of babies with seizures; however, it was nonspecific, in that 50% of infants with seizures detected by amplitude-integrated EEG did not have true seizures by video EEG. In conclusion, our study suggests that amplitude-integrated EEG is a poor screening tool for neonatal seizures.

  20. Forest Fire Smoke Video Detection Using Spatiotemporal and Dynamic Texture Features

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Smoke detection is a key part of fire recognition in forest fire surveillance video, since the smoke produced by forest fires is visible well before the flames. The performance of smoke video detection algorithms is often degraded by smoke-like objects such as heavy fog. This paper presents a novel forest fire smoke video detection method based on spatiotemporal features and dynamic texture features. First, Kalman filtering is used to segment candidate smoke regions. Then, each candidate smoke region is divided into small blocks. The spatiotemporal energy feature of each block is extracted by computing the energy features of its 8-neighboring blocks in the current frame and its two adjacent frames. The flutter direction angle is computed by analyzing the centroid motion of the segmented regions in one candidate smoke video clip. The Local Binary Motion Pattern (LBMP) is used to define dynamic texture features of smoke videos. Finally, smoke video is recognized by the Adaboost algorithm. The experimental results show that the proposed method can effectively detect smoke in video recorded from different scenes.

  1. The psychophysiology of James Bond: phasic emotional responses to violent video game events.

    Science.gov (United States)

    Ravaja, Niklas; Turpeinen, Marko; Saari, Timo; Puttonen, Sampsa; Keltikangas-Järvinen, Liisa

    2008-02-01

    The authors examined emotional valence- and arousal-related phasic psychophysiological responses to different violent events in the first-person shooter video game "James Bond 007: NightFire" among 36 young adults. Event-related changes in zygomaticus major, corrugator supercilii, and orbicularis oculi electromyographic (EMG) activity and skin conductance level (SCL) were recorded; the participants rated their emotions, and trait psychoticism was assessed with the Psychoticism dimension of the Eysenck Personality Questionnaire--Revised, Short Form. Wounding and killing the opponent elicited an increase in SCL and a decrease in zygomatic and orbicularis oculi EMG activity. The decrease in zygomatic and orbicularis oculi activity was less pronounced among high Psychoticism scorers compared with low Psychoticism scorers. The wounding and death of the player's own character (James Bond) elicited an increase in SCL and zygomatic and orbicularis oculi EMG activity and a decrease in corrugator activity. Instead of joy resulting from victory and success, wounding and killing the opponent may elicit high-arousal negative affect (anxiety), with high Psychoticism scorers experiencing less anxiety than low Psychoticism scorers. Although counterintuitive, the wounding and death of the player's own character may increase some aspects of positive emotion.

  2. VideoStory: A New Multimedia Embedding for Few Example Recognition and Translation of Events

    Science.gov (United States)

    2014-11-07

    This objective minimizes the quadratic error between the original video descriptions Y and the reconstructed translations obtained from A and S. [...] For this purpose, we parse the grammatical structure of title captions using a probabilistic [...] according to Eq. (2). Then the video embedding is learned separately, by minimizing the error of predicting the embedded descriptions from the videos. [Figure 3: Terms from the VideoStory46K dataset occurring in...]

  3. Short-Term Effects of Prosocial Video Games on Aggression: An Event-Related Potential Study

    OpenAIRE

    Yanling eLiu; Yanling eLiu; Zhaojun eTeng; Haiying eLan; Xin eZhang; Dezhong eYao

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 minutes...

  4. Short-term effects of prosocial video games on aggression: an event-related potential study

    OpenAIRE

    Liu, Yanling; Teng, Zhaojun; Lan, Haiying; Zhang, Xin; Yao, Dezhong

    2015-01-01

    Previous research has shown that exposure to violent video games increases aggression, whereas exposure to prosocial video games can reduce aggressive behavior. However, little is known about the neural correlates of these behavioral effects. This work is the first to investigate the electrophysiological features of the relationship between playing a prosocial video game and inhibition of aggressive behavior. Forty-nine subjects played either a prosocial or a neutral video game for 20 min, th...

  5. Holistic quaternion vector convolution filter for RGB-depth video contour detection

    Science.gov (United States)

    Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan; Zhang, Ye

    2017-05-01

    A quaternion vector gradient filter is proposed for RGB-depth (RGB-D) video contour detection. First, a holistic quaternion vector system is introduced to jointly express the color and depth information, by adding the depth to the scalar part. Then, a convolution differential operator for quaternion vectors is proposed to highlight edges with both depth and chromatic variations while restraining the gradient of the intensity term. In addition, the quaternion vector gradients are adaptively weighted using a depth confidence measure and the quadtree decomposition of the coding tree units in the video stream. Results on the 3-D high-efficiency video coding test sequences and quantitative simulated experiments on the Berkeley segmentation datasets both indicate the effectiveness of the proposed gradient-based method in detecting the semantic contours of RGB-D videos.
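
    The core representation is easy to sketch: each RGB-D pixel becomes a quaternion whose scalar part is depth and whose imaginary parts are the color channels. The code below is an illustrative numpy version of that idea, with a simple finite-difference gradient standing in for the paper's convolution differential operator and adaptive weighting.

```python
import numpy as np

def rgbd_to_quaternion(rgb, depth):
    """Sketch of the holistic quaternion representation: each pixel becomes
    q = d + R*i + G*j + B*k, i.e., depth in the scalar part and the color
    channels in the three imaginary parts. Inputs are float arrays scaled
    to [0, 1]; returns an (H, W, 4) array of quaternion components."""
    h, w, _ = rgb.shape
    q = np.empty((h, w, 4), dtype=np.float32)
    q[..., 0] = depth          # scalar part: depth
    q[..., 1:] = rgb           # imaginary parts: R, G, B
    return q

def quaternion_gradient_magnitude(q):
    """Simple per-component horizontal/vertical differences combined into a
    single gradient magnitude; a stand-in for the paper's convolution
    differential operator, which additionally down-weights intensity."""
    gx = np.diff(q, axis=1, prepend=q[:, :1, :])
    gy = np.diff(q, axis=0, prepend=q[:1, :, :])
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))
```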

  6. Graph clustering for weapon discharge event detection and tracking in infrared imagery using deep features

    Science.gov (United States)

    Bhattacharjee, Sreyasee Das; Talukder, Ashit

    2017-05-01

    This paper addresses the problem of detecting and tracking weapon discharge events in an infrared imagery collection. While most of the prior work in related domains exploits the vast amount of complementary information available from both visible-band (EO) and infrared (IR) images (or video sequences), we handle the problem of recognizing human pose and activity detection exclusively in thermal (IR) images or videos. The task is primarily two-fold: 1) locating the individual in the scene from IR imagery, and 2) identifying the correct pose of the human individual (i.e. presence or absence of weapon discharge activity or intent). An efficient graph-based shortlisting strategy for identifying candidate regions of interest in the IR image utilizes both image saliency and mutual similarities from the initial list of the top-scored proposals of a given query frame, which ensures improved performance for both detection and recognition simultaneously and reduces false alarms. The proposed search strategy offers an efficient feature extraction scheme that can capture the maximum amount of object structural information by defining a region-based deep shape descriptor representing each object of interest present in the scene. Therefore, our solution is capable of handling the fundamental incompleteness of IR imagery, for which conventional deep features optimized on the natural color images in ImageNet are not quite suitable. Our preliminary experiments on the OSU weapon dataset demonstrate significant success in automated recognition of weapon discharge events from IR imagery.

  7. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach, the detection of gradual transitions is improved by decreasing the number of false positives caused by certain camera operations. We also compare the performance of our algorithm with that of the standard twin-comparison method.
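
    The twin-comparison method uses two thresholds on a frame-difference signal: a high threshold for abrupt cuts and a low threshold that opens a candidate gradual transition whose differences are then accumulated. The sketch below illustrates that logic on an arbitrary difference sequence; the threshold values are illustrative, and the dominant-color-ratio term used in the paper is not included.

```python
import numpy as np

def twin_comparison(frame_diffs, t_high=0.4, t_low=0.1):
    """Minimal sketch of the twin-comparison algorithm on a sequence of
    frame-to-frame difference values (e.g., histogram differences in [0, 1]).

    A difference above t_high marks an abrupt cut. A difference above t_low
    opens a candidate gradual transition; consecutive differences are then
    accumulated, and the transition is confirmed once the accumulated
    difference exceeds t_high."""
    cuts, graduals = [], []
    start, acc = None, 0.0
    for i, d in enumerate(frame_diffs):
        if d >= t_high:
            cuts.append(i)
            start, acc = None, 0.0
        elif d >= t_low:
            if start is None:
                start, acc = i, 0.0
            acc += d
            if acc >= t_high:
                graduals.append((start, i))
                start, acc = None, 0.0
        else:
            start, acc = None, 0.0   # candidate ended without confirmation
    return cuts, graduals

# Hypothetical difference sequence: a cut at index 3 and a gradual transition
# starting at index 6.
print(twin_comparison([0.02, 0.03, 0.01, 0.55, 0.02, 0.03,
                       0.15, 0.18, 0.12, 0.16, 0.02]))  # -> ([3], [(6, 8)])
```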

  8. Joint Wavelet Video Denoising and Motion Activity Detection in Multimodal Human Activity Analysis: Application to Video-Assisted Bioacoustic/Psychophysiological Monitoring

    Science.gov (United States)

    Dimoulas, C. A.; Avdelidis, K. A.; Kalliris, G. M.; Papanikolaou, G. V.

    2007-12-01

    The current work focuses on the design and implementation of an indoor surveillance application for long-term automated analysis of human activity in a video-assisted biomedical monitoring system. Video processing is necessary to overcome noise-related problems caused by suboptimal video capturing conditions, due to poor lighting or even complete darkness during overnight recordings. Modified wavelet-domain spatiotemporal Wiener filtering and motion-detection algorithms are employed to facilitate video enhancement, motion-activity-based indexing and summarization. Structural aspects for validation of the motion detection results are also used. The proposed system has already been deployed in the monitoring of long-term abdominal sounds, for surveillance automation, motion-artefact detection and connection with other psychophysiological parameters. However, it can be used in any video-assisted biomedical monitoring or other surveillance application with similar demands.

  9. Event Coverage Detection and Event Source Determination in Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhangbing Zhou

    2015-12-01

    Full Text Available With the advent of the Internet of Underwater Things, smart things are deployed in the ocean space and establish underwater wireless sensor networks for the monitoring of vast and dynamic underwater environments. When events are found to have possibly occurred, accurate event coverage should be detected, and potential event sources should be determined for the enactment of prompt and proper responses. To address this challenge, a technique that detects event coverage and determines event sources is developed in this article. Specifically, the occurrence of possible events corresponds to a set of neighboring sensor nodes whose sensory data may deviate from a normal sensing range in a collective fashion. An appropriate sensor node is selected as the relay node for gathering and routing sensory data to sink node(s. When sensory data are collected at sink node(s, the event coverage is detected and represented as a weighted graph, where the vertices in this graph correspond to sensor nodes and the weight specified upon the edges reflects the extent of sensory data deviating from a normal sensing range. Event sources are determined, which correspond to the barycenters in this graph. The results of the experiments show that our technique is more energy efficient, especially when the network topology is relatively steady.

  10. Event Coverage Detection and Event Source Determination in Underwater Wireless Sensor Networks.

    Science.gov (United States)

    Zhou, Zhangbing; Xing, Riliang; Duan, Yucong; Zhu, Yueqin; Xiang, Jianming

    2015-12-15

    With the advent of the Internet of Underwater Things, smart things are deployed in the ocean space and establish underwater wireless sensor networks for the monitoring of vast and dynamic underwater environments. When events are found to have possibly occurred, accurate event coverage should be detected, and potential event sources should be determined for the enactment of prompt and proper responses. To address this challenge, a technique that detects event coverage and determines event sources is developed in this article. Specifically, the occurrence of possible events corresponds to a set of neighboring sensor nodes whose sensory data may deviate from a normal sensing range in a collective fashion. An appropriate sensor node is selected as the relay node for gathering and routing sensory data to sink node(s). When sensory data are collected at sink node(s), the event coverage is detected and represented as a weighted graph, where the vertices in this graph correspond to sensor nodes and the weight specified upon the edges reflects the extent of sensory data deviating from a normal sensing range. Event sources are determined, which correspond to the barycenters in this graph. The results of the experiments show that our technique is more energy efficient, especially when the network topology is relatively steady.
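
    The coverage-and-source step can be pictured with a small graph computation. The sketch below is only one possible reading of that idea: deviating nodes form a graph whose edge lengths shrink as the endpoints deviate more (an assumed weighting, not taken from the paper), so the graph barycenter, computed with networkx, lands near the strongest deviations; the readings, normal range, and node names are hypothetical.

```python
import itertools
import networkx as nx

def event_sources(readings, normal_range=(20.0, 25.0), neighbor_pairs=None):
    """Sketch of event-coverage detection and source determination.

    `readings` maps sensor-node id -> sensory value; nodes whose values
    deviate from `normal_range` form the event coverage. They are connected
    into a weighted graph (all deviating pairs if no neighbour list is
    given), with edge weights that decrease as the two endpoints deviate
    more, so the barycenter falls near the strongest deviations."""
    lo, hi = normal_range

    def deviation(v):
        return max(lo - v, v - hi, 0.0)

    deviating = {n: deviation(v) for n, v in readings.items() if deviation(v) > 0}
    g = nx.Graph()
    g.add_nodes_from(deviating)
    pairs = neighbor_pairs or itertools.combinations(deviating, 2)
    for a, b in pairs:
        if a in deviating and b in deviating:
            g.add_edge(a, b, weight=1.0 / (deviating[a] + deviating[b]))
    return nx.barycenter(g, weight="weight")

# Hypothetical readings (e.g., degrees Celsius) from five underwater nodes.
print(event_sources({"n1": 22.0, "n2": 27.5, "n3": 29.0, "n4": 26.5, "n5": 21.0}))
# -> ['n3'], the node with the largest deviation
```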

  11. Integrating pedestrian simulation, tracking and event detection for crowd analysis

    OpenAIRE

    Butenuth, Matthias; Burkert, Florian; Kneidl, Angelika; Borrmann, Andre; Schmidt, Florian; Hinz, Stefan; Sirmacek, Beril; Hartmann, Dirk

    2011-01-01

    In this paper, an overall framework for crowd analysis is presented. Detection and tracking of pedestrians as well as detection of dense crowds is performed on image sequences to improve simulation models of pedestrian flows. Additionally, graph-based event detection is performed by using Hidden Markov Models on pedestrian trajectories utilizing knowledge from simulations. Experimental results show the benefit of our integrated framework using simulation and real-world data for crowd anal...

  12. Do Instructional Videos on Sputum Submission Result in Increased Tuberculosis Case Detection? A Randomized Controlled Trial.

    Science.gov (United States)

    Mhalu, Grace; Hella, Jerry; Doulla, Basra; Mhimbira, Francis; Mtutu, Hawa; Hiza, Helen; Sasamalo, Mohamed; Rutaihwa, Liliana; Rieder, Hans L; Seimon, Tamsyn; Mutayoba, Beatrice; Weiss, Mitchell G; Fenner, Lukas

    2015-01-01

    We examined the effect of an instructional video about the production of diagnostic sputum on case detection of tuberculosis (TB), and evaluated the acceptance of the video. Randomized controlled trial. We prepared a culturally adapted instructional video for sputum submission. We analyzed 200 presumptive TB cases coughing for more than two weeks who attended the outpatient department of the governmental Municipal Hospital in Mwananyamala (Dar es Salaam, Tanzania). They were randomly assigned to either receive instructions on sputum submission using the video before submission (intervention group, n = 100) or standard of care (control group, n = 100). Sputum samples were examined for volume, quality and presence of acid-fast bacilli by experienced laboratory technicians blinded to study groups. Median age was 39.1 years (interquartile range 37.0-50.0); 94 (47%) were females, 106 (53%) were males, and 49 (24.5%) were HIV-infected. We found that the instructional video intervention was associated with detection of a higher proportion of microscopically confirmed cases (56% [95% confidence interval, CI, 45.7-65.9%] sputum smear positive patients in the intervention group versus 23% [95% CI 15.2-32.5%] in the control group). The majority of patients in the intervention group reported having understood the video instructions well (97%). Most of the patients thought the video would be useful in the cultural setting of Tanzania (92%). Sputum submission instructional videos increased the yield of tuberculosis cases through better quality of sputum samples. If confirmed in larger studies, instructional videos may have a substantial effect on the case yield using sputum microscopy and also molecular tests. This low-cost strategy should be considered as part of the efforts to control TB in resource-limited settings. Pan African Clinical Trials Registry PACTR201504001098231.

  13. Detection capability of the Italian network for teleseismic events

    Directory of Open Access Journals (Sweden)

    A. Marchetti

    1994-06-01

    Full Text Available The future GSE experiment is based on a global seismic monitoring system that should be designed for monitoring compliance with a nuclear test ban treaty. Every country participating in the test will transmit data to the International Data Center. Because of the high quality of data required, we decided to conduct this study in order to determine the set of stations to be used in the experiment. The Italian telemetered seismological network can detect all events of at least magnitude 2.5 whose epicenters are inside the network itself. For external events the situation is different: the capability of detection is conditioned not only by the noise conditions at the station, but also by the relative position of epicenter and station. The ING bulletin (January 1991-June 1992) was the data set for the present work. Comparing these data with the National Earthquake Information Center (NEIC) bulletin, we established which stations are most reliable in detecting teleseismic events and, moreover, how distance and back-azimuth can influence event detection. Furthermore, we investigated the reliability of the automatic acquisition system in relation to teleseismic event detection.

  14. A Unified Framework for Tracking Based Text Detection and Recognition from Web Videos.

    Science.gov (United States)

    Tian, Shu; Yin, Xu-Cheng; Su, Ya; Hao, Hong-Wei

    2017-04-12

    Video text extraction plays an important role in multimedia understanding and retrieval. Most previous research efforts are conducted within individual frames. A few recent methods pay attention to text tracking using multiple frames; however, they do not effectively mine the relations among text detection, tracking and recognition. In this paper, we propose a generic Bayesian-based framework of Tracking based Text Detection And Recognition (T2DAR) from web videos for embedded captions, which is composed of three major components, i.e., text tracking, tracking based text detection, and tracking based text recognition. In this unified framework, text tracking is first conducted by tracking-by-detection. Tracking trajectories are then revised and refined with detection or recognition results. Text detection or recognition is finally improved with multi-frame integration. Moreover, a challenging video text (embedded caption text) database (USTB-VidTEXT) is constructed and made publicly available. A variety of experiments on this dataset verify that our proposed approach largely improves the performance of text detection and recognition from web videos.

  15. Detection and localization of copy-paste forgeries in digital videos.

    Science.gov (United States)

    Singh, Raahat Devender; Aggarwal, Naveen

    2017-12-01

    Amidst the continual march of technology, we find ourselves relying on digital videos to proffer visual evidence in several highly sensitive areas such as journalism, politics, civil and criminal litigation, and military and intelligence operations. However, despite being an indispensable source of information with high evidentiary value, digital videos are also extremely vulnerable to conscious manipulations. Therefore, in a situation where dependence on video evidence is unavoidable, it becomes crucial to authenticate the contents of this evidence before accepting them as an accurate depiction of reality. Digital videos can suffer from several kinds of manipulations, but perhaps, one of the most consequential forgeries is copy-paste forgery, which involves insertion/removal of objects into/from video frames. Copy-paste forgeries alter the information presented by the video scene, which has a direct effect on our basic understanding of what that scene represents, and so, from a forensic standpoint, the challenge of detecting such forgeries is especially significant. In this paper, we propose a sensor pattern noise based copy-paste detection scheme, which is an improved and forensically stronger version of an existing noise-residue based technique. We also study a demosaicing artifact based image forensic scheme to estimate the extent of its viability in the domain of video forensics. Furthermore, we suggest a simplistic clustering technique for the detection of copy-paste forgeries, and determine if it possess the capabilities desired of a viable and efficacious video forensic scheme. Finally, we validate these schemes on a set of realistically tampered MJPEG, MPEG-2, MPEG-4, and H.264/AVC encoded videos in a diverse experimental set-up by varying the strength of post-production re-compressions and transcodings, bitrates, and sizes of the tampered regions. Such an experimental set-up is representative of a neutral testing platform and simulates a real

  16. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    Directory of Open Access Journals (Sweden)

    Tao Yang

    2016-09-01

    Full Text Available Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects and relies on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously.

  17. Artificial intelligence based event detection in wireless sensor networks

    OpenAIRE

    Bahrepour, M.

    2013-01-01

    Wireless sensor networks (WSNs) are composed of large number of small, inexpensive devices, called sensor nodes, which are equipped with sensing, processing, and communication capabilities. While traditional applications of wireless sensor networks focused on periodic monitoring, the focus of more recent applications is on fast and reliable identification of out-of-ordinary situations and events. This new functionality of wireless sensor networks is known as event detection. Due to the fact t...

  18. High contextual sensitivity of metaphorical expressions and gesture blending: A video event-related potential design.

    Science.gov (United States)

    Ibáñez, Agustín; Toro, Pablo; Cornejo, Carlos; Urquina, Hugo; Hurquina, Hugo; Manes, Facundo; Weisbrod, Matthias; Schröder, Johannes

    2011-01-30

    Human communication in a natural context implies the dynamic coordination of contextual clues, paralinguistic information and literal as well as figurative language use. In the present study we constructed a paradigm with four types of video clips: literal and metaphorical expressions accompanied by congruent and incongruent gesture actions. Participants were instructed to classify the gesture accompanying the expression as congruent or incongruent by pressing two different keys while electrophysiological activity was being recorded. We compared behavioral measures and event related potential (ERP) differences triggered by the gesture stroke onset. Accuracy data showed that incongruent metaphorical expressions were more difficult to classify. Reaction times were modulated by incongruent gestures, by metaphorical expressions and by a gesture-expression interaction. No behavioral differences were found between the literal and metaphorical expressions when the gesture was congruent. N400-like and LPC-like (late positive complex) components from metaphorical expressions produced greater negativity. The N400-like modulation of metaphorical expressions showed a greater difference between congruent and incongruent categories over the left anterior region, compared with the literal expressions. More importantly, the literal congruent as well as the metaphorical congruent categories did not show any difference. Accuracy, reaction times and ERPs provide convergent support for a greater contextual sensitivity of the metaphorical expressions. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Deep Learning for Detection of Object-Based Forgery in Advanced Video

    Directory of Open Access Journals (Sweden)

    Ye Yao

    2017-12-01

    Full Text Available Passive video forensics has drawn much attention in recent years. However, research on the detection of object-based forgery, especially for forged video encoded with advanced codec frameworks, remains a great challenge. In this paper, we propose a deep learning-based approach to detect object-based forgery in advanced video. The presented deep learning approach utilizes a convolutional neural network (CNN) to automatically extract high-dimensional features from the input image patches. Different from the traditional CNN models used in the computer vision domain, we let video frames go through three preprocessing layers before they are fed into our CNN model. These include a frame absolute difference layer to cut down temporal redundancy between video frames, a max pooling layer to reduce the computational complexity of image convolution, and a high-pass filter layer to enhance the residual signal left by video forgery. In addition, an asymmetric data augmentation strategy has been established to obtain a similar number of positive and negative image patches before training. The experiments have demonstrated that the proposed CNN-based model with the preprocessing layers achieves excellent results.
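
    The three preprocessing layers are straightforward to mock up. The sketch below, assuming grayscale uint8 frames and using a well-known 5x5 high-pass residual kernel from media forensics as an illustrative choice (not necessarily the paper's filter), chains frame differencing, max pooling, and high-pass filtering to produce the tensor that would be fed to the CNN.

```python
import numpy as np
from scipy.ndimage import convolve

# A commonly used 5x5 high-pass residual kernel from media forensics,
# chosen here only as an illustrative filter for the residual-enhancement layer.
HIGH_PASS = np.array([[-1,  2,  -2,  2, -1],
                      [ 2, -6,   8, -6,  2],
                      [-2,  8, -12,  8, -2],
                      [ 2, -6,   8, -6,  2],
                      [-1,  2,  -2,  2, -1]], dtype=np.float32) / 12.0

def max_pool(x, k=2):
    """k x k max pooling with stride k (crops any remainder)."""
    h, w = x.shape[0] // k * k, x.shape[1] // k * k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def preprocess_pair(frame_t, frame_t1):
    """Sketch of the three preprocessing steps applied before the CNN:
    1) absolute difference of consecutive grayscale frames (temporal
       redundancy reduction), 2) max pooling (complexity reduction),
    3) high-pass filtering (residual enhancement)."""
    diff = np.abs(frame_t.astype(np.float32) - frame_t1.astype(np.float32))
    pooled = max_pool(diff, k=2)
    residual = convolve(pooled, HIGH_PASS, mode="reflect")
    return residual  # would be fed to the CNN as input
```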

  20. People detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.b, E-mail: eduardo@lps.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Eletrica; Cota, Raphael E.; Ramos, Bruno L., E-mail: brunolange@poli.ufrj.b [Universidade Federal do Rio de Janeiro (EP/UFRJ), RJ (Brazil). Dept. de Engenharia Eletronica e de Computacao

    2011-07-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear facility at the Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detection, segmentation and tracking of people in video. This first paper reports people segmentation in video using background subtraction, by two different approaches, namely frame differencing and blind signal separation based on the independent component analysis method. Results are discussed, along with perspectives for further work. (author)

  1. Handbook of video databases design and applications

    CERN Document Server

    Furht, Borko

    2003-01-01

    INTRODUCTION: Introduction to Video Databases (Oge Marques and Borko Furht). VIDEO MODELING AND REPRESENTATION: Modeling Video Using Input/Output Markov Models with Application to Multi-Modal Event Detection (Ashutosh Garg, Milind R. Naphade, and Thomas S. Huang); Statistical Models of Video Structure and Semantics (Nuno Vasconcelos); Flavor: A Language for Media Representation (Alexandros Eleftheriadis and Danny Hong); Integrating Domain Knowledge and Visual Evidence to Support Highlight Detection in Sports Videos (Juergen Assfalg, Marco Bertini, Carlo Colombo, and Alberto Del Bimbo); A Generic Event Model and Sports Vid...

  2. Method for early detection of cooling-loss events

    Science.gov (United States)

    Bermudez, Sergio A.; Hamann, Hendrik; Marianno, Fernando J.

    2015-06-30

    A method of detecting cooling-loss events early is provided. The method includes defining a relative humidity limit and a change threshold for a given space, measuring relative humidity in the given space, determining, with a processing unit, whether the measured relative humidity is within the defined relative humidity limit, generating a warning in the event the measured relative humidity is outside the defined relative humidity limit, determining whether a change in the measured relative humidity is less than the defined change threshold for the given space, and generating an alarm in the event the change is greater than the defined change threshold.
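
    Read as plain monitoring logic, the claim boils down to two checks per sample. The sketch below is one possible reading: "outside the limit" is taken to mean above an upper bound, and the numeric limit and threshold are illustrative values, not figures from the patent.

```python
def check_cooling_loss(rh_now, rh_prev, rh_limit=60.0, change_threshold=5.0):
    """Sketch of the described warning/alarm logic. rh_* are relative
    humidity readings in percent; the limit and threshold values are
    illustrative, not taken from the patent."""
    warning = rh_now > rh_limit                     # outside the defined RH limit
    alarm = (rh_now - rh_prev) > change_threshold   # RH rising faster than allowed
    return warning, alarm

# Hypothetical readings: humidity jumps from 48% to 62% between samples.
print(check_cooling_loss(62.0, 48.0))  # -> (True, True)
```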

  3. A video-based eye pupil detection system for diagnosing bipolar disorder

    OpenAIRE

    AKINCI, Gökay; Polat, Ediz; Koçak, Orhan Murat

    2012-01-01

    Eye pupil detection systems have become increasingly popular in image processing and computer vision applications in medical systems. In this study, a video-based eye pupil detection system is developed for diagnosing bipolar disorder. Bipolar disorder is a condition in which people experience changes in cognitive processes and abilities, including reduced attentional and executive capabilities and impaired memory. In order to detect these abnormal behaviors, a number of neuropsychologi...

  4. Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO

    Directory of Open Access Journals (Sweden)

    Lixin Yan

    2016-07-01

    Full Text Available The ability to identify hazardous traffic events is already considered one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied in previous work, which was mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of prediction accuracy. The conclusions can provide reference evidence for the development of dangerous-situation warning products and the design of intelligent vehicles.
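
    A generic feature-selection-plus-SVM pipeline conveys the flavor of this two-step design. In the sketch below, mutual-information-based selection stands in for Markov blanket discovery (which scikit-learn does not provide), scikit-learn's SVC (an SMO-type libsvm solver) stands in for the SMO classifier, and the data are synthetic placeholders rather than the study's driving data.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical driving features per sample: speed, std of speed, std of skin
# conductance, std of brake pressure, turn signal, steering acceleration,
# std of acceleration, vertical acceleration (G). Labels: 1 = hazardous event.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 6] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Mutual-information selection is a stand-in for the Markov blanket step;
# SVC uses libsvm, an SMO-type solver, close in spirit to the paper's SMO model.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=4),
    SVC(kernel="rbf", C=1.0),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```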

  5. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over...

  6. Efficiently detecting outlying behavior in video-game players

    National Research Council Canada - National Science Library

    Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo; Kim, Chang Hun

    2015-01-01

    In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise...

  7. Efficiently detecting outlying behavior in video-game players

    OpenAIRE

    Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo; Kim, Chang Hun

    2015-01-01

    In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players’ characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. M...

  8. Bayesian foreground and shadow detection in uncertain frame rate surveillance videos.

    Science.gov (United States)

    Benedek, C; Sziranyi, T

    2008-04-01

    In this paper, we propose a new model for foreground and shadow detection in video sequences. The model works without detailed a priori object-shape information, and it is also appropriate for low and unstable frame rate video sources. Contributions are presented on three key issues: 1) we propose a novel adaptive shadow model and show the improvements over previous approaches in scenes with difficult lighting and coloring effects; 2) we give a novel description of the foreground based on spatial statistics of the neighboring pixel values, which enhances the detection of background- or shadow-colored object parts; 3) we show how microstructure analysis can be used in the proposed framework as additional feature components improving the results. Finally, a Markov random field model is used to enhance the accuracy of the separation. We validate our method on outdoor and indoor sequences including real surveillance videos and well-known benchmark test sets.

  9. Real-time billboard trademark detection and recognition in sports video

    Science.gov (United States)

    Bu, Jiang; Lao, Song-Yan; Bai, Liang

    2013-03-01

    Nowadays, different applications such as automatic video indexing, keyword-based video search, and TV commercial detection can be developed by detecting and recognizing billboard trademarks. We propose a hierarchical solution for real-time billboard trademark recognition in various sports videos. In the first level, billboard frames are detected; a fuzzy decision tree with easily computed features is employed to accelerate the process. In the second level, color and regional SIFT features are combined for the first time to describe the appearance of trademarks, and shared nearest neighbor (SNN) clustering with the χ2 distance is utilized instead of traditional K-means clustering to construct the SIFT vocabulary; finally, Latent Semantic Analysis (LSA) based SIFT vocabulary matching is performed on the template trademark and the candidate regions in the billboard frame. Preliminary experiments demonstrate the effectiveness of the hierarchical solution, and real-time constraints are also met by our solution.

  10. TNO at TRECVID 2013: Multimedia Event Detection and Instance Search

    NARCIS (Netherlands)

    Bouma, H.; Azzopardi, G.; Spitters, M.M.; Wit, J.J. de; Versloot, C.A.; Zon, R.W.L. van der; Eendebak, P.T.; Baan, J.; Hove, R.J.M. ten; Eekeren, A.W.M. van; Haar, F.B. ter; Hollander, R.J.M. den; Huis, R.J. van; Boer, M.H.T. de; Antwerpen, G. van; Broekhuijsen, B.J.; Daniele, L.M.; Brandt, P.; Schavemaker, J.G.M.; Kraaij, W.; Schutte, K.

    2013-01-01

    We describe the TNO system and the evaluation results for TRECVID 2013 Multimedia Event Detection (MED) and instance search (INS) tasks. The MED system consists of a bag-of-word (BOW) approach with spatial tiling that uses low-level static and dynamic visual features, an audio feature and high-level

  11. Distributed Event Detection in Wireless Sensor Networks for Disaster Management

    NARCIS (Netherlands)

    Bahrepour, M.; Meratnia, Nirvana; Poel, Mannes; Taghikhaki, Zahra; Havinga, Paul J.M.

    2010-01-01

    Recently, wireless sensor networks (WSNs) have become mature enough to go beyond being simple fine-grained continuous monitoring platforms and become one of the enabling technologies for disaster early-warning systems. Event detection functionality of WSNs can be of great help and importance for

  12. On Event Detection and Localization in Acyclic Flow Networks

    KAUST Repository

    Suresh, Mahima Agumbe

    2013-05-01

    Acyclic flow networks, present in many infrastructures of national importance (e.g., oil and gas and water distribution systems), have been attracting immense research interest. Existing solutions for detecting and locating attacks against these infrastructures have been proven costly and imprecise, particularly when dealing with large-scale distribution systems. In this article, to the best of our knowledge, for the first time, we investigate how mobile sensor networks can be used for optimal event detection and localization in acyclic flow networks. We propose the idea of using sensors that move along the edges of the network and detect events (i.e., attacks). To localize the events, sensors detect proximity to beacons, which are devices with known placement in the network. We formulate the problem of minimizing the cost of monitoring infrastructure (i.e., minimizing the number of sensors and beacons deployed) in a predetermined zone of interest, while ensuring a degree of coverage by sensors and a required accuracy in locating events using beacons. We propose algorithms for solving the aforementioned problem and demonstrate their effectiveness with results obtained from a realistic flow network simulator.

  13. Fusion of acoustic measurements with video surveillance for estuarine threat detection

    Science.gov (United States)

    Bunin, Barry; Sutin, Alexander; Kamberov, George; Roh, Heui-Seol; Luczynski, Bart; Burlick, Matt

    2008-04-01

    Stevens Institute of Technology has established a research laboratory environment in support of the U.S. Navy in the area of Anti-Terrorism and Force Protection. Called the Maritime Security Laboratory, or MSL, it provides the capabilities of experimental research to enable development of novel methods of threat detection in the realistic environment of the Hudson River Estuary. In MSL, this is done through a multi-modal interdisciplinary approach. In this paper, underwater acoustic measurements and video surveillance are combined. Stevens' researchers have developed a specialized prototype video system to identify, video-capture, and map surface ships in a sector of the estuary. The combination of acoustic noise with video data for different kinds of ships in Hudson River enabled estimation of sound attenuation in a wide frequency band. Also, it enabled the collection of a noise library of various ships that can be used for ship classification by passive acoustic methods. Acoustics and video can be used to determine a ship's position. This knowledge can be used for ship noise suppression in hydrophone arrays in underwater threat detection. Preliminary experimental results of position determination are presented in the paper.

  14. Facial Video based Detection of Physical Fatigue for Maximal Muscle Activity

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Physical fatigue reveals the health condition of a person during, for example, a health checkup, fitness assessment or rehabilitation training. This paper presents an efficient noncontact system for detecting non-localized physical fatigue from maximal muscle activity using facial videos acquired...

  15. Detecting Road Users at Intersections Through Changing Weather Using RGB-Thermal Videos

    DEFF Research Database (Denmark)

    Bahnsen, Chris; Moeslund, Thomas B.

    2015-01-01

    This paper compares the performance of a watch-dog system that detects road user actions in urban intersections to a KLT-based tracking system used in traffic surveillance. The two approaches are evaluated on 16 hours of video data captured by RGB and thermal cameras under challenging light...

  16. Do Instructional Videos on Sputum Submission Result in Increased Tuberculosis Case Detection? A Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Grace Mhalu

    Full Text Available We examined the effect of an instructional video about the production of diagnostic sputum on case detection of tuberculosis (TB), and evaluated the acceptance of the video. Randomized controlled trial. We prepared a culturally adapted instructional video for sputum submission. We analyzed 200 presumptive TB cases coughing for more than two weeks who attended the outpatient department of the governmental Municipal Hospital in Mwananyamala (Dar es Salaam, Tanzania). They were randomly assigned to either receive instructions on sputum submission using the video before submission (intervention group, n = 100) or standard of care (control group, n = 100). Sputum samples were examined for volume, quality and presence of acid-fast bacilli by experienced laboratory technicians blinded to study groups. Median age was 39.1 years (interquartile range 37.0-50.0); 94 (47%) were females, 106 (53%) were males, and 49 (24.5%) were HIV-infected. We found that the instructional video intervention was associated with detection of a higher proportion of microscopically confirmed cases (56%, 95% confidence interval [95% CI] 45.7-65.9%, sputum smear positive patients in the intervention group versus 23%, 95% CI 15.2-32.5%, in the control group, p < 0.0001), an increase in volume of specimen, defined as a volume ≥3 ml (78%, 95% CI 68.6-85.7%, versus 45%, 95% CI 35.0-55.3%, p < 0.0001), and specimens less likely to be salivary (14%, 95% CI 7.9-22.4%, versus 39%, 95% CI 29.4-49.3%, p = 0.0001). Older age, but not HIV status or sex, positively modified the effectiveness of the intervention. When asked how well the video instructions were understood, the majority of patients in the intervention group reported to have understood the video instructions well (97%). Most of the patients thought the video would be useful in the cultural setting of Tanzania (92%). Sputum submission instructional videos increased the yield of tuberculosis cases through better quality of sputum

  17. Performance optimization for pedestrian detection on degraded video using natural scene statistics

    Science.gov (United States)

    Winterlich, Anthony; Denny, Patrick; Kilmartin, Liam; Glavin, Martin; Jones, Edward

    2014-11-01

    We evaluate the effects of transmission artifacts such as JPEG compression and additive white Gaussian noise on the performance of a state-of-the-art pedestrian detection algorithm, which is based on integral channel features. Integral channel features combine the diversity of information obtained from multiple image channels with the computational efficiency of the Viola and Jones detection framework. We utilize "quality aware" spatial image statistics to blindly categorize distorted video frames by distortion type and level without the use of an explicit reference. We combine quality statistics with a multiclassifier detection framework for optimal pedestrian detection performance across varying image quality. Our detection method provides statistically significant improvements over current approaches based on single classifiers, on two large pedestrian databases containing a wide variety of artificially added distortion. The improvement in detection performance is further demonstrated on real video data captured from multiple cameras containing varying levels of sensor noise and compression. The results of our research have the potential to be used in real-time in-vehicle networks to improve pedestrian detection performance across a wide range of image and video quality.

  18. Chemical Detection Architecture for a Subway System [video

    OpenAIRE

    Ignacio, Joselito; Center for Homeland Defense and Security Naval Postgraduate School

    2014-01-01

    This proposed system process aims to improve subway safety through better enabling the rapid detection and response to a chemical release in a subway system. The process is designed to be location-independent and generalized to most subway systems despite each system's unique characteristics.

  19. Efficiently detecting outlying behavior in video-game players

    Directory of Open Access Journals (Sweden)

    Young Bin Kim

    2015-12-01

    Full Text Available In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players’ characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments.

  20. Detecting rare gene transfer events in bacterial populations

    Directory of Open Access Journals (Sweden)

    Kaare Magne Nielsen

    2014-01-01

    Full Text Available Horizontal gene transfer (HGT) enables bacteria to access, share, and recombine genetic variation, resulting in genetic diversity that cannot be obtained through mutational processes alone. In most cases, the observation of evolutionarily successful HGT events relies on the outcome of initially rare events that lead to novel functions in the new host, and that exhibit a positive effect on host fitness. Conversely, the large majority of HGT events occurring in bacterial populations will go undetected due to lack of replication success of transformants. Moreover, other HGT events that would be highly beneficial to new hosts can fail to ensue due to lack of physical proximity to the donor organism, lack of a suitable gene transfer mechanism, genetic compatibility, and stochasticity in tempo-spatial occurrence. Experimental attempts to detect HGT events in bacterial populations have typically focused on the transformed cells or their immediate offspring. However, rare HGT events occurring in large and structured populations are unlikely to reach relative population sizes that will allow their immediate identification; the exception being the unusually strong positive selection conferred by antibiotics. Most HGT events are not expected to alter the likelihood of host survival to such an extreme extent, and will confer only minor changes in host fitness. Due to the large population sizes of bacteria and the time scales involved, the process and outcome of HGT are often not amenable to experimental investigation. Population genetic modeling of the growth dynamics of bacteria with differing HGT rates and resulting fitness changes is therefore necessary to guide sampling design and predict realistic time frames for detection of HGT, as it occurs in laboratory or natural settings. Here we review the key population genetic parameters, consider their complexity and highlight knowledge gaps for further research.

  1. Detection and interpretation of seismoacoustic events at German infrasound stations

    Science.gov (United States)

    Pilger, Christoph; Koch, Karl; Ceranna, Lars

    2016-04-01

    Three infrasound arrays with collocated or nearby installed seismometers are operated by the Federal Institute for Geosciences and Natural Resources (BGR) as the German National Data Center (NDC) for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Infrasound generated by seismoacoustic events is routinely detected at these infrasound arrays, but air-to-ground coupled acoustic waves occasionally show up in seismometer recordings as well. Different natural and artificial sources like meteoroids as well as industrial and mining activity generate infrasonic signatures that are simultaneously detected at microbarometers and seismometers. Furthermore, many near-surface sources like earthquakes and explosions generate both seismic and infrasonic waves that can be detected successively with both technologies. The combined interpretation of seismic and acoustic signatures provides additional information about the origin time and location of remote infrasound events or about the characterization of seismic events distinguishing man-made and natural origins. Furthermore, seismoacoustic studies help to improve the modelling of infrasound propagation and ducting in the atmosphere and allow quantifying the portion of energy coupled into ground and into air by seismoacoustic sources. An overview of different seismoacoustic sources and their detection by German infrasound stations as well as some conclusions on the benefit of a combined seismoacoustic analysis are presented within this study.

  2. The research of moving objects behavior detection and tracking algorithm in aerial video

    Science.gov (United States)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    The article focuses on the research of moving target detection and tracking algorithm in Aerial monitoring. Study includes moving target detection, moving target behavioral analysis and Target Auto tracking. In moving target detection, the paper considering the characteristics of background subtraction and frame difference method, using background reconstruction method to accurately locate moving targets; in the analysis of the behavior of the moving object, using matlab technique shown in the binary image detection area, analyzing whether the moving objects invasion and invasion direction; In Auto Tracking moving target, A video tracking algorithm that used the prediction of object centroids based on Kalman filtering was proposed.
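
    As a compact illustration of the detection-plus-tracking idea, the sketch below thresholds a frame difference to locate motion and feeds the detected centroid to a constant-velocity Kalman filter, which predicts the object position when detection momentarily fails. The video file, threshold, and noise settings are assumptions; the paper itself uses background reconstruction rather than plain frame differencing.

```python
# Hedged sketch: frame-difference motion detection plus Kalman centroid tracking.
# Requires OpenCV >= 4 for the findContours return signature.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                              # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)

cap = cv2.VideoCapture("aerial.mp4")                     # hypothetical aerial video
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(cv2.absdiff(gray, prev), 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    predicted = kf.predict()                             # predicted centroid (used when detection fails)
    if contours:
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            kf.correct(np.array([[cx], [cy]], np.float32))
    prev = gray
```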

  3. Αutomated 2D shoreline detection from coastal video imagery: an example from the island of Crete

    Science.gov (United States)

    Velegrakis, A. F.; Trygonis, V.; Vousdoukas, M. I.; Ghionis, G.; Chatzipavlis, A.; Andreadis, O.; Psarros, F.; Hasiotis, Th.

    2015-06-01

    Beaches are both sensitive and critical coastal system components as they: (i) are vulnerable to coastal erosion (due to e.g. wave regime changes and the short- and long-term sea level rise) and (ii) form valuable ecosystems and economic resources. In order to identify/understand the current and future beach morphodynamics, effective monitoring of the beach spatial characteristics (e.g. the shoreline position) at adequate spatio-temporal resolutions is required. In this contribution we present the results of a new, fully-automated detection method of the (2-D) shoreline positions using high resolution video imaging from a Greek island beach (Ammoudara, Crete). A fully-automated feature detection method was developed/used to monitor the shoreline position in geo-rectified coastal imagery obtained through a video system set to collect 10 min videos every daylight hour with a sampling rate of 5 Hz, from which snapshot, time-averaged (TIMEX) and variance (SIGMA) images were generated. The developed coastal feature detector is based on a very fast algorithm using a localised kernel that progressively grows along the SIGMA or TIMEX digital image, following the maximum backscatter intensity along the feature of interest; the detector results were found to compare very well with those obtained from a semi-automated 'manual' shoreline detection procedure. The automated procedure was tested on video imagery obtained from the eastern part of Ammoudara beach in two 5-day periods, a low wave energy period (6-10 April 2014) and a high wave energy period (1-5 November 2014). The results showed that, during the high wave energy event, there were much higher levels of shoreline variance which, however, appeared to be similarly unevenly distributed along the shoreline as that related to the low wave energy event. Shoreline variance 'hot spots' were found to be related to the presence/architecture of an offshore submerged shallow beachrock reef, found at a distance of 50-80 m
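
    The detector described above runs on time-averaged (TIMEX) and variance (SIGMA) image products; a minimal way to build both from a video clip is sketched below using a running (Welford) update. The frame source, sampling step, and grayscale conversion are assumptions, and the shoreline-following kernel itself is not reproduced here.

```python
# Hedged sketch: compute TIMEX (time-averaged) and SIGMA (per-pixel standard
# deviation) images from a coastal video clip.
import cv2
import numpy as np

def timex_sigma(video_path, step=5):
    cap = cv2.VideoCapture(video_path)
    mean = m2 = None
    n = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % step:                                   # subsample frames for speed
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        n += 1
        if mean is None:
            mean, m2 = gray.copy(), np.zeros_like(gray)
        else:
            delta = gray - mean                          # Welford's online mean/variance update
            mean += delta / n
            m2 += delta * (gray - mean)
    sigma = np.sqrt(m2 / max(n - 1, 1))
    return mean, sigma                                   # TIMEX, SIGMA
```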

  4. Gait event detection during stair walking using a rate gyroscope.

    Science.gov (United States)

    Formento, Paola Catalfamo; Acevedo, Ruben; Ghoussayni, Salim; Ewins, David

    2014-03-19

    Gyroscopes have been proposed as sensors for ambulatory gait analysis and functional electrical stimulation systems. These applications often require detection of the initial contact (IC) of the foot with the floor and/or final contact or foot off (FO) from the floor during outdoor walking. Previous investigations have reported the use of a single gyroscope placed on the shank for detection of IC and FO on level ground and incline walking. This paper describes the evaluation of a gyroscope placed on the shank for determination of IC and FO in subjects ascending and descending a set of stairs. Performance was compared with a reference pressure measurement system. The absolute mean difference between the gyroscope and the reference was less than 45 ms for IC and better than 135 ms for FO for both activities. Detection success was over 93%. These results provide preliminary evidence supporting the use of a gyroscope for gait event detection when walking up and down stairs.
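
    A common shank-gyroscope heuristic, sketched below, low-pass filters the sagittal angular velocity, finds the prominent mid-swing peaks, and takes the adjacent troughs as candidate initial-contact (IC) and foot-off (FO) events. This is an illustrative rule under an assumed sampling rate and thresholds, not the validated algorithm evaluated in the paper.

```python
# Hedged sketch: candidate IC/FO gait events from shank sagittal angular velocity.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def gait_events(gyro_z, fs=100.0):
    b, a = butter(4, 5.0 / (fs / 2.0), btype="low")      # smooth at ~5 Hz
    w = filtfilt(b, a, gyro_z)
    swing, _ = find_peaks(w, height=np.percentile(w, 90), distance=int(0.5 * fs))
    ic, fo = [], []
    for p in swing:
        after = w[p:p + int(0.8 * fs)]
        before = w[max(p - int(0.8 * fs), 0):p]
        if after.size:
            ic.append(p + int(np.argmin(after)))          # trough after mid-swing ~ initial contact
        if before.size:
            fo.append(p - before.size + int(np.argmin(before)))  # trough before mid-swing ~ foot off
    return np.array(ic), np.array(fo)
```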

  5. Modeling Patterns of Activity and Detecting Abnormal Events with Low-Level Co-occurrences

    Science.gov (United States)

    Benezeth, Yannick; Jodoin, Pierre-Marc; Saligrama, Venkatesh

    We explore in this chapter a location-based approach for behavior modeling and abnormality detection. In contrast to conventional object-based approaches, in which objects are identified, classified, and tracked to locate those with suspicious behavior, we proceed directly with event characterization and behavior modeling using low-level features. Our approach consists of two phases. In the first phase, co-occurrences of activity between temporal sequences of motion labels are used to build a statistical model for normal behavior. This model of co-occurrence statistics is embedded within a co-occurrence matrix which accounts for spatio-temporal co-occurrence of activity. In the second phase, the co-occurrence matrix is used as a potential function in a Markov random field framework to describe, as the video streams in, the probability of observing new volumes of activity. The co-occurrence matrix is thus used for detecting moving objects whose behavior differs from those observed during the training phase. Interestingly, the Markov random field distribution implicitly accounts for speed, direction, as well as the average size of the objects without any higher-level intervention. Furthermore, when the spatio-temporal volume is large enough, the co-occurrence distribution contains the average normal path followed by moving objects. Our method has been tested on various outdoor videos representing various challenges.
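
    The sketch below shows one plausible way to accumulate such co-occurrence statistics: binary motion labels are pooled into image blocks, and block activity is correlated over a short temporal lag to fill the matrix. Block size, activity threshold, and lag are assumptions; the chapter's per-pixel formulation and MRF inference are not reproduced.

```python
# Hedged sketch: spatio-temporal co-occurrence matrix of block-level motion activity.
import numpy as np

def cooccurrence_matrix(motion_labels, block=16, lag=2):
    """motion_labels: (T, H, W) boolean motion mask; H and W must be multiples of `block`."""
    T, H, W = motion_labels.shape
    bh, bw = H // block, W // block
    act = motion_labels.reshape(T, bh, block, bw, block).mean(axis=(2, 4)) > 0.05
    act = act.reshape(T, -1).astype(np.float64)          # (T, n_blocks) block activity
    C = np.zeros((act.shape[1], act.shape[1]))
    for dt in range(lag + 1):                            # co-occurrence within a small temporal lag
        C += act[:T - dt].T @ act[dt:]
    return C / max(T, 1)
```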

  6. PMU Data Event Detection: A User Guide for Power Engineers

    Energy Technology Data Exchange (ETDEWEB)

    Allen, A.; Singh, M.; Muljadi, E.; Santoso, S.

    2014-10-01

    This user guide is intended to accompany a software package containing a Matrix Laboratory (MATLAB) script and related functions for processing phasor measurement unit (PMU) data. This package and guide have been developed by the National Renewable Energy Laboratory and the University of Texas at Austin. The objective of this data processing exercise is to discover events in the vast quantities of data collected by PMUs. This document attempts to cover some of the theory behind processing the data to isolate events as well as the functioning of the MATLAB scripts. The report describes (1) the algorithms and mathematical background that the accompanying MATLAB codes use to detect events in PMU data and (2) the inputs required from the user and the outputs generated by the scripts.
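
    A minimal flavour of such event screening is sketched below: each sample of a PMU frequency stream is compared against a rolling median, and deviations beyond a robust threshold are flagged as candidate events. The window length, threshold, and the pandas-based interface are assumptions and do not mirror the accompanying MATLAB scripts.

```python
# Hedged sketch: rolling robust-deviation screening of a PMU frequency stream.
import numpy as np
import pandas as pd

def candidate_events(freq: pd.Series, window=300, k=5.0):
    """freq: frequency samples indexed by timestamp; returns timestamps of candidate events."""
    med = freq.rolling(window, center=True, min_periods=1).median()
    mad = (freq - med).abs().rolling(window, center=True, min_periods=1).median()
    robust_std = (1.4826 * mad).replace(0, np.nan)       # MAD -> standard deviation under normality
    score = (freq - med).abs() / robust_std
    return freq.index[score > k]
```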

  7. Sound Event Detection for Music Signals Using Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Pablo A. Alvarado-Durán

    2013-11-01

    Full Text Available In this paper we present a new methodology for detecting sound events in music signals using Gaussian processes. Our method first takes a time-frequency representation, i.e. the spectrogram, of the input audio signal. Secondly, the spectrogram dimension is reduced by translating the linear Hertz frequency scale into the logarithmic Mel frequency scale using a triangular filter bank. Finally, every short-time spectrum, i.e. every Mel spectrogram column, is classified as “Event” or “Not Event” by a Gaussian process classifier. We compare our method with other widely used event detection techniques. To do so, we use MATLAB® to program each technique and test them using two datasets of music with different levels of complexity. Results show that the new methodology outperforms the standard approaches, achieving an improvement of about 1.66% on dataset one and 0.45% on dataset two in terms of F-measure.
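
    The per-column decision described above can be approximated as follows: compute a Mel spectrogram, treat each short-time column as a feature vector, and train a Gaussian-process classifier to label it "Event" or "Not Event". The audio file, label array, kernel, and the subsampling used to keep the GP tractable are all assumptions; the original work was implemented in MATLAB.

```python
# Hedged sketch: Gaussian-process classification of Mel-spectrogram columns.
import librosa
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

y, sr = librosa.load("music.wav", sr=22050)              # hypothetical input signal
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)
X = librosa.power_to_db(mel).T                            # one row per short-time frame

labels = np.load("frame_labels.npy")                      # hypothetical 0/1 labels per frame
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0))
gpc.fit(X[:2000], labels[:2000])                           # GPs scale poorly; subsample training frames
events = gpc.predict(X)                                    # 1 = "Event", 0 = "Not Event"
```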

  8. Detecting modification of biomedical events using a deep parsing approach.

    Science.gov (United States)

    Mackinlay, Andrew; Martinez, David; Baldwin, Timothy

    2012-04-30

    This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. analysis of IkappaBalpha phosphorylation, where it is not specified whether phosphorylation did or did not occur) or negated (e.g. inhibition of IkappaBalpha phosphorylation, where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this, we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification.
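
    Restricting attention to the shallow features, the idea can be illustrated as a maximum-entropy (logistic-regression) classifier over a bag of words taken from a ±3-token window around each event trigger, as sketched below. The toy training examples and the omission of the deep-parser features are assumptions made purely for illustration.

```python
# Hedged sketch: MaxEnt classification of event modification from trigger-word context windows.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def window_text(tokens, trigger_idx, size=3):
    lo, hi = max(0, trigger_idx - size), trigger_idx + size + 1
    return " ".join(tokens[lo:hi])                        # bag-of-words context around the trigger

# toy data: (sentence tokens, trigger index, label) following the speculative/negated distinction
train = [(["inhibition", "of", "IkappaBalpha", "phosphorylation"], 3, "negated"),
         (["analysis", "of", "IkappaBalpha", "phosphorylation"], 3, "speculative")]

X = [window_text(t, i) for t, i, _ in train]
y = [label for _, _, label in train]
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict([window_text(["blocking", "of", "p53", "activation"], 3)]))
```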

  9. Detecting modification of biomedical events using a deep parsing approach

    Directory of Open Access Journals (Sweden)

    MacKinlay Andrew

    2012-04-01

    Full Text Available Abstract Background: This work describes a system for identifying event mentions in bio-molecular research abstracts that are either speculative (e.g. analysis of IkappaBalpha phosphorylation, where it is not specified whether phosphorylation did or did not occur) or negated (e.g. inhibition of IkappaBalpha phosphorylation, where phosphorylation did not occur). The data comes from a standard dataset created for the BioNLP 2009 Shared Task. The system uses a machine-learning approach, where the features used for classification are a combination of shallow features derived from the words of the sentences and more complex features based on the semantic outputs produced by a deep parser. Method: To detect event modification, we use a Maximum Entropy learner with features extracted from the data relative to the trigger words of the events. The shallow features are bag-of-words features based on a small sliding context window of 3-4 tokens on either side of the trigger word. The deep parser features are derived from parses produced by the English Resource Grammar and the RASP parser. The outputs of these parsers are converted into the Minimal Recursion Semantics formalism, and from this, we extract features motivated by linguistics and the data itself. All of these features are combined to create training or test data for the machine learning algorithm. Results: Over the test data, our methods produce approximately a 4% absolute increase in F-score for detection of event modification compared to a baseline based only on the shallow bag-of-words features. Conclusions: Our results indicate that grammar-based techniques can enhance the accuracy of methods for detecting event modification.

  10. Malware in the Future? Forecasting Analyst Detection of Cyber Events

    OpenAIRE

    Bakdash, Jonathan Z.; Hutchinson, Steve; Zaroukian, Erin G.; Marusich, Laura R.; Thirumuruganathan, Saravanan; Sample, Charmaine; Hoffman, Blaine; Das, Gautam

    2017-01-01

    Cyber attacks endanger physical, economic, social, and political security. We use a Bayesian state space model to forecast the number of future cyber attacks. Cyber attacks were defined as malware detected by cyber analysts over seven years using cyber events (i.e., reports of malware attacks supported by evidence) at a large Computer Security Service Provider (CSSP). This CSSP protects a variety of computers and networks, which are critical infrastructure, for the U.S. Department of Defense ...

  11. Ethical use of covert videoing techniques in detecting Munchausen syndrome by proxy.

    Science.gov (United States)

    Foreman, D M; Farsides, C

    1993-01-01

    Munchausen syndrome by proxy is an especially malignant form of child abuse in which the carer (usually the mother) fabricates or exacerbates illness in the child to obtain medical attention. It can result in serious illness and even death of the child and it is difficult to detect. Some investigators have used video to monitor the carer's interaction with the child without obtaining consent--covert videoing. The technique presents several ethical problems, including exposure of the child to further abuse and a breach of trust between carer, child, and the professionals. Although covert videoing can be justified in restricted circumstances, new abuse procedures under the Children Act now seem to make its use unethical in most cases. Sufficient evidence should mostly be obtained from separation of the child and carer or videoing with consent to enable action to be taken to protect the child under an assessment order. If the new statutory instruments prove ineffective in Munchausen syndrome by proxy covert videoing may need to be re-evaluated. PMID:8401021

  12. Microseismic Events Detection on Xishancun Landslide, Sichuan Province, China

    Science.gov (United States)

    Sheng, M.; Chu, R.; Wei, Z.

    2016-12-01

    On a landslide, the slope movement and the fracturing of the rock mass often lead to microearthquakes, which are recorded as weak signals on seismographs. The temporal and spatial distribution characteristics of regional instability, as well as the impact of external factors on the unstable regions, can be understood and analyzed by monitoring those microseismic events. The microseismic method can provide information from inside the landslide, which can be used as a supplement to geodetic methods for monitoring the movement of the landslide surface. Compared to drilling on a landslide, the microseismic method is more economical and safe. Xishancun Landslide is located about 60 km northwest of the Wenchuan earthquake centroid; it has kept deforming after the earthquake, which greatly increases the probability of disasters. In the autumn of 2015, 30 seismometers were deployed on the landslide for 3 months with intervals of 200-500 meters. First, we used regional earthquakes for time correction of the seismometers to eliminate the influence of inaccurate GPS clocks and the subsurface structure beneath the stations. Due to the low velocity of the loose medium, the travel time difference of microseismic events across the landslide is up to 5 s. According to travel time and waveform characteristics, we found many microseismic events and converted them into envelopes as templates; we then used a sliding-window cross-correlation technique based on the waveform envelope to detect the other microseismic events. Consequently, 100 microseismic events were detected with waveforms recorded on all seismometers. Based on the locations, we found that most of them were situated at the front of the landslide, while the others were at the back end. The bottom and top of the landslide accumulated considerable energy and deformed substantially, and the radiated waves could be recorded by all stations. Moreover, the bottom, with more events, appeared to be very active. In addition, many smaller events occurred in the middle part of the landslide, where released
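
    The core detection step, matching a continuous trace against the envelope of a template event, can be sketched as a sliding normalized correlation, as below. Envelope extraction via the Hilbert transform, the correlation threshold, and the sample inputs are assumptions rather than the survey's actual processing chain.

```python
# Hedged sketch: sliding-window cross-correlation of waveform envelopes against a template.
import numpy as np
from scipy.signal import hilbert

def envelope_detections(trace, template, threshold=0.7):
    env_t = np.abs(hilbert(trace))                        # envelope of the continuous trace
    env_m = np.abs(hilbert(template))                     # envelope of the template event
    env_m = env_m - env_m.mean()
    n = env_m.size
    scores = np.zeros(trace.size - n + 1)
    for i in range(scores.size):                          # normalized correlation per window
        w = env_t[i:i + n] - env_t[i:i + n].mean()
        denom = np.linalg.norm(w) * np.linalg.norm(env_m)
        scores[i] = (w @ env_m) / denom if denom > 0 else 0.0
    return np.where(scores > threshold)[0]                # candidate onset samples
```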

  13. DeTeCt 3.0: A software tool to detect impacts of small objects in video observations of Jupiter obtained by amateur astronomers

    Science.gov (United States)

    Juaristi, J.; Delcroix, M.; Hueso, R.; Sánchez-Lavega, A.

    2017-09-01

    Impacts of small objects (10-20 m in diameter) with Jupiter's atmosphere result in luminous superbolides that can be observed from the Earth with small telescopes. Impacts of this kind have been observed four times by amateur astronomers since July 2010. The probability of observing one of these events is very small. Amateur astronomers observe Jupiter using fast video cameras that record thousands of frames over a few minutes; these frames are combined into a single image that is generally of high resolution. Flashes are brief and faint, and are often lost by the image reconstruction software. We present major upgrades to DeTeCt, a software tool initially developed by amateur astronomer Marc Delcroix, and our current project to maximize the chances of detecting more of these impacts in Jupiter.
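
    The essence of flash detection, flagging frames whose integrated brightness jumps well above a running baseline, is sketched below; the file name, baseline window, and threshold are assumptions, and the real DeTeCt pipeline additionally registers frames and localizes the flash on the planetary disc.

```python
# Hedged sketch: candidate impact flashes as outliers in per-frame integrated brightness.
import cv2
import numpy as np

def flash_candidates(video_path, k=6.0, baseline=51):
    cap = cv2.VideoCapture(video_path)
    totals = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        totals.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64).sum())
    totals = np.asarray(totals)
    med = np.array([np.median(totals[max(0, i - baseline):i + 1]) for i in range(totals.size)])
    resid = totals - med                                  # brightness excess over running median
    sigma = 1.4826 * np.median(np.abs(resid)) + 1e-9
    return np.where(resid > k * sigma)[0]                 # frame indices of candidate flashes
```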

  14. Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos.

    Science.gov (United States)

    Lequan Yu; Hao Chen; Qi Dou; Jing Qin; Pheng Ann Heng

    2017-01-01

    Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, an automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework by leveraging the 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with the previous methods employing hand-crafted features or a 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of the MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.

  15. Towards Optimal Event Detection and Localization in Acyclic Flow Networks

    KAUST Repository

    Agumbe Suresh, Mahima

    2012-01-03

    Acyclic flow networks, present in many infrastructures of national importance (e.g., oil & gas and water distribution systems), have been attracting immense research interest. Existing solutions for detecting and locating attacks against these infrastructures, have been proven costly and imprecise, especially when dealing with large scale distribution systems. In this paper, to the best of our knowledge for the first time, we investigate how mobile sensor networks can be used for optimal event detection and localization in acyclic flow networks. Sensor nodes move along the edges of the network and detect events (i.e., attacks) and proximity to beacon nodes with known placement in the network. We formulate the problem of minimizing the cost of monitoring infrastructure (i.e., minimizing the number of sensor and beacon nodes deployed), while ensuring a degree of sensing coverage in a zone of interest and a required accuracy in locating events. We propose algorithms for solving these problems and demonstrate their effectiveness with results obtained from a high fidelity simulator.

  16. Statistical language analysis for automatic exfiltration event detection.

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, David Gerald

    2010-04-01

    This paper discusses the recent development of a statistical approach for the automatic identification of anomalous network activity that is characteristic of exfiltration events. This approach is based on the language processing method referred to as latent Dirichlet allocation (LDA). Cyber security experts currently depend heavily on a rule-based framework for initial detection of suspect network events. The application of the rule set typically results in an extensive list of suspect network events that are then further explored manually for suspicious activity. The ability to identify anomalous network events is heavily dependent on the experience of the security personnel wading through the network log. Limitations of this approach are clear: rule-based systems only apply to exfiltration behavior that has previously been observed, and experienced cyber security personnel are rare commodities. Since the new methodology is not a discrete rule-based approach, it is more difficult for an insider to disguise the exfiltration events. A further benefit is that the methodology provides a risk-based approach that can be implemented in a continuous, dynamic or evolutionary fashion. This permits suspect network activity to be identified early with a quantifiable risk associated with decision making when responding to suspicious activity.
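
    The LDA idea can be illustrated by treating each slice of network activity (say, one host-hour of event tokens) as a document, fitting a topic model on normal traffic, and flagging new documents the model explains poorly. The token vocabulary, document construction, and perplexity-based scoring below are assumptions for illustration, not the system described in the report.

```python
# Hedged sketch: LDA topic model over network-log "documents" with perplexity scoring.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["dns_query http_get http_get smtp_out",     # hypothetical host-hour token streams
              "ssh_login file_read file_read http_get"]
new_docs = ["ftp_out ftp_out ftp_out dns_query"]

vec = CountVectorizer()
X_train = vec.fit_transform(train_docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X_train)
X_new = vec.transform(new_docs)
print("perplexity of new activity:", lda.perplexity(X_new))  # high values suggest unusual behaviour
```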

  17. Detection of Epileptic Seizure Event and Onset Using EEG

    Science.gov (United States)

    Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul

    2014-01-01

    This study proposes a method of automatic detection of epileptic seizure event and onset using wavelet based features and certain statistical features without wavelet decomposition. Normal and epileptic EEG signals were classified using linear classifier. For seizure event detection, Bonn University EEG database has been used. Three types of EEG signals (EEG signal recorded from healthy volunteer with eye open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. Important features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed and classification was done using linear classifier. The performance of classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. In the case of seizure onset detection, the database used is CHB-MIT scalp EEG database. Along with wavelet based features, interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. Classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
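
    The feature pipeline can be approximated as below: decompose each EEG segment with a discrete wavelet transform, compute energy, entropy, standard deviation, maximum, minimum, and mean per subband, and train a linear classifier. The wavelet family, decomposition level, random demo data, and choice of linear discriminant analysis as the linear classifier are assumptions.

```python
# Hedged sketch: wavelet subband statistics + linear classifier for seizure detection.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(segment, wavelet="db4", level=4):
    feats = []
    for c in pywt.wavedec(segment, wavelet, level=level): # approximation + detail subbands
        p = c ** 2
        q = p / (p.sum() + 1e-12)
        feats += [p.sum(),                                # energy
                  -(q * np.log(q + 1e-12)).sum(),         # entropy
                  c.std(), c.max(), c.min(), c.mean()]
    return np.array(feats)

# hypothetical data: (n_segments, n_samples) EEG array with 0 = normal, 1 = seizure labels
X_raw = np.random.randn(20, 1024)
y = np.r_[np.zeros(10), np.ones(10)]
X = np.vstack([wavelet_features(s) for s in X_raw])
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```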

  18. Detection of Epileptic Seizure Event and Onset Using EEG

    Directory of Open Access Journals (Sweden)

    Nabeel Ahammad

    2014-01-01

    Full Text Available This study proposes a method of automatic detection of epileptic seizure event and onset using wavelet based features and certain statistical features without wavelet decomposition. Normal and epileptic EEG signals were classified using linear classifier. For seizure event detection, Bonn University EEG database has been used. Three types of EEG signals (EEG signal recorded from healthy volunteer with eye open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. Important features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed and classification was done using linear classifier. The performance of classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. In the case of seizure onset detection, the database used is CHB-MIT scalp EEG database. Along with wavelet based features, interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. Classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds.

  19. MUTUAL COMPARATIVE FILTERING FOR CHANGE DETECTION IN VIDEOS WITH UNSTABLE ILLUMINATION CONDITIONS

    Directory of Open Access Journals (Sweden)

    S. V. Sidyakin

    2016-06-01

    Full Text Available In this paper we propose a new approach for change detection and moving objects detection in videos with unstable, abrupt illumination changes. This approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantage for change detection purposes. Presented approach allows us to deal with changing illumination conditions in a simple and efficient way and does not have drawbacks, which exist in models that assume different color transformation laws. The proposed procedure can be used to improve a number of background modelling methods, which are not specifically designed to work under illumination changes.

  20. Unsupervised behaviour-specific dictionary learning for abnormal event detection

    DEFF Research Database (Denmark)

    Ren, Huamin; Liu, Weifeng; Olsen, Søren Ingvor

    2015-01-01

    Despite progress in this area, the relationship of atoms within the dictionary is commonly neglected; as a result, anomalies detected based on reconstruction error can produce high false alarm rates - noise or infrequent normal visual features can be wrongly detected as anomalies, especially when... the training data is only a small proportion of the surveillance data. Therefore, we propose behavior-specific dictionaries (BSD) through unsupervised learning, pursuing atoms from the same type of behavior to represent one behavior dictionary. To further improve the dictionary by introducing information from... potential infrequent normal patterns, we refine the dictionary by searching for ‘missed atoms’ that have compact coefficients. Experimental results show that our BSD algorithm outperforms state-of-the-art dictionaries in abnormal event detection on the public UCSD dataset. Moreover, BSD has fewer false alarms...

  1. Pedestrian detection in video surveillance using fully convolutional YOLO neural network

    Science.gov (United States)

    Molchanov, V. V.; Vishnyakov, B. V.; Vizilter, Y. V.; Vishnyakova, O. V.; Knyaz, V. A.

    2017-06-01

    More than 80% of video surveillance systems are used for monitoring people. Older human detection algorithms, based on background and foreground modelling, could not even deal with a group of people, to say nothing of a crowd. Recent robust and highly effective pedestrian detection algorithms are a new milestone of video surveillance systems. Based on modern approaches in deep learning, these algorithms produce very discriminative features that can be used for getting robust inference in real visual scenes. They deal with such tasks as distinguishing different persons in a group, overcoming the problem of substantial occlusion of human bodies by the foreground, and detecting various poses of people. In our work we use a new approach which enables us to combine detection and classification tasks into one challenge using convolutional neural networks. As a starting point we chose the YOLO CNN, whose authors propose a very efficient way of combining the tasks mentioned above by learning a single neural network. This approach showed competitive results with state-of-the-art models such as FAST R-CNN, significantly surpassing them in speed, which allows us to apply it in real-time video surveillance and other video monitoring systems. Despite all its advantages, it suffers from some known drawbacks related to the fully-connected layers, which obstruct applying the CNN to images with different resolutions. It also limits the ability to distinguish small, close human figures in groups, which is crucial for our tasks, since we work with rather low-quality images that often include dense, small groups of people. In this work we gradually change the network architecture to overcome the problems mentioned above, train it on a complex pedestrian dataset, and finally obtain a CNN that detects small pedestrians in real scenes.

  2. A clinically viable capsule endoscopy video analysis platform for automatic bleeding detection

    Science.gov (United States)

    Yi, Steven; Jiao, Heng; Xie, Jean; Mui, Peter; Leighton, Jonathan A.; Pasha, Shabana; Rentz, Lauri; Abedi, Mahmood

    2013-02-01

    In this paper, we present a novel and clinically valuable software platform for automatic bleeding detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos of the GI tract run about 8 hours and are manually reviewed by physicians to locate diseases such as bleedings and polyps. As a result, the process is time consuming and prone to missed findings. While researchers have made efforts to automate this process, no clinically acceptable software is available on the marketplace today. Working with our collaborators, we have developed a clinically viable software platform called GISentinel for fully automated GI tract bleeding detection and classification. Major functional modules of the SW include: the innovative graph-based NCut segmentation algorithm, the unique feature selection and validation method (e.g. illumination invariant features, color independent features, and symmetrical texture features), and the cascade SVM classification for handling various GI tract scenes (e.g. normal tissue, food particles, bubbles, fluid, and specular reflection). Initial evaluation results for the SW have shown a zero bleeding-instance miss rate and a 4.03% false alarm rate. This work is part of our innovative 2D/3D based GI tract disease detection software platform. While the overall SW framework is designed for intelligent finding and classification of major GI tract diseases such as bleeding, ulcer, and polyp from CE videos, this paper focuses on the automatic bleeding detection functional module.

  3. Event Detection Intelligent Camera: Demonstration of flexible, real-time data taking and processing

    Energy Technology Data Exchange (ETDEWEB)

    Szabolics, Tamás, E-mail: szabolics.tamas@wigner.mta.hu; Cseh, Gábor; Kocsis, Gábor; Szepesi, Tamás; Zoletnik, Sándor

    2015-10-15

    Highlights: • We present a description of EDICAM's operation principles. • Firmware test results. • Software test results. • Further developments. - Abstract: An innovative fast camera (EDICAM – Event Detection Intelligent CAMera) was developed by MTA Wigner RCP in the last few years. This new concept was designed for intelligent event-driven processing to be able to detect predefined events and track objects in the plasma. The camera provides a moderate frame rate of 400 Hz at full frame resolution (1280 × 1024), and readout of smaller regions of interest can be done in the 1–140 kHz range even during exposure of the full image. One of the most important advantages of this hardware is a 10 Gbit/s optical link which ensures very fast communication and data transfer between the PC and the camera, enabling two levels of processing: primitive algorithms in the camera hardware and high-level processing in the PC. This camera hardware has successfully proven able to monitor the plasma in several fusion devices, for example at ASDEX Upgrade, KSTAR and COMPASS, with the first version of the firmware. A new firmware and software package is under development. It allows predefined events to be detected in real time, and therefore the camera is capable of changing its own operation or giving warnings, e.g. to the safety system of the experiment. The EDICAM system can handle a huge amount of data (up to TBs) with a high data rate (950 MB/s) and will be used as the central element of the 10-camera overview video diagnostic system of the Wendelstein 7-X (W7-X) stellarator. This paper presents key elements of the newly developed built-in intelligence, stressing the revolutionary new features and the results of the tests of the different software elements.

  4. Towards a Video Passive Content Fingerprinting Method for Partial-Copy Detection Robust against Non-Simulated Attacks.

    Directory of Open Access Journals (Sweden)

    Zobeida Jezabel Guzman-Zavaleta

    Full Text Available Passive content fingerprinting is widely used for video content identification and monitoring. However, many challenges remain unsolved especially for partial-copies detection. The main challenge is to find the right balance between the computational cost of fingerprint extraction and fingerprint dimension, without compromising detection performance against various attacks (robustness. Fast video detection performance is desirable in several modern applications, for instance, in those where video detection involves the use of large video databases or in applications requiring real-time video detection of partial copies, a process whose difficulty increases when videos suffer severe transformations. In this context, conventional fingerprinting methods are not fully suitable to cope with the attacks and transformations mentioned before, either because the robustness of these methods is not enough or because their execution time is very high, where the time bottleneck is commonly found in the fingerprint extraction and matching operations. Motivated by these issues, in this work we propose a content fingerprinting method based on the extraction of a set of independent binary global and local fingerprints. Although these features are robust against common video transformations, their combination is more discriminant against severe video transformations such as signal processing attacks, geometric transformations and temporal and spatial desynchronization. Additionally, we use an efficient multilevel filtering system accelerating the processes of fingerprint extraction and matching. This multilevel filtering system helps to rapidly identify potential similar video copies upon which the fingerprint process is carried out only, thus saving computational time. We tested with datasets of real copied videos, and the results show how our method outperforms state-of-the-art methods regarding detection scores. Furthermore, the granularity of our method makes

  5. Machine learning for the automatic detection of anomalous events

    Science.gov (United States)

    Fisher, Wendy D.

    In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. We achieve up to 97
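
    The multivariate Gaussian model mentioned above reduces to fitting a mean and covariance on features from normal operation and flagging test samples whose density falls below a threshold, as sketched here with synthetic data; the feature extraction and denoising steps of the dissertation are assumed to happen upstream.

```python
# Hedged sketch: multivariate Gaussian anomaly detection on feature vectors.
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian(X_normal):
    mu = X_normal.mean(axis=0)
    cov = np.cov(X_normal, rowvar=False) + 1e-6 * np.eye(X_normal.shape[1])  # regularized covariance
    return multivariate_normal(mean=mu, cov=cov)

def flag_anomalies(model, X, eps=1e-6):
    return model.logpdf(X) < np.log(eps)                  # True where the density is suspiciously low

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                       # hypothetical normal-operation features
model = fit_gaussian(X_train)
X_test = np.vstack([rng.normal(size=(5, 4)), rng.normal(6.0, 1.0, size=(2, 4))])
print(flag_anomalies(model, X_test))                      # the last two samples should be flagged
```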

  6. A search engine for retrieval and inspection of events with 48 human actions in realistic videos

    NARCIS (Netherlands)

    Burghouts, G.J.; Penning, H.L.H. de; Hove, R.J.M. ten; Landsmeer, S.; Broek, S.P. van den; Hollander, R.J.M.; Hanckmann, P.; Kruithof, M.C.; Leeuwen, C.J. van; Korzec, S.; Bouma, H.; Schutte, K.

    2013-01-01

    The contribution of this paper is a search engine that recognizes and describes 48 human actions in realistic videos. The core algorithms have been published recently, from the early visual processing (Bouma, 2012), discriminative recognition (Burghouts, 2012) and textual description (Hankmann,

  7. Detection and objects tracking present in 2D digital video with Matlab

    Directory of Open Access Journals (Sweden)

    Melvin Ramírez Bogantes

    2013-09-01

    Full Text Available This paper presents the main results of research on the design of an algorithm to detect and track an object in a video recording. The algorithm was designed in MatLab software, and the videos used, which show the presence of the mite Varroa destructor in the cells of Africanized honey bees, were provided by the Centro de Investigación Apícola Tropical (CINAT-UNA). The main result is the creation of a program capable of detecting and recording the movement of the mite; this is something innovative and useful for the studies of the behavior of this species in honey bee cells carried out by CINAT.

  8. Endmember detection in marine environment with oil spill event

    Science.gov (United States)

    Andreou, Charoula; Karathanassi, Vassilia

    2011-11-01

    Oil spill events are a crucial environmental issue. Detection of oil spills is important for both oil exploration and environmental protection. In this paper, hyperspectral remote sensing is investigated for the detection of oil spills and the discrimination of different oil types. Spectral signatures of different oil types are very useful, since they may serve as endmembers in unmixing and classification models. Towards this direction, an oil spectral library, resulting from spectral measurements of artificial oil spills as well as of look-alikes in a marine environment, was compiled. Samples of four different oil types were used: two crude oils, one marine residual fuel oil, and one light petroleum product. Look-alikes comprise sea water, river discharges, shallow water and water with algae. Spectral measurements were acquired with the GER1500 spectro-radiometer. Moreover, oil and look-alike spectral signatures have been examined as to whether they can serve as endmembers. This was accomplished by testing their linear independence. After that, synthetic hyperspectral images based on the relevant oil spectral library were created. Several simplex-based endmember algorithms such as the sequential maximum angle convex cone (SMACC), vertex component analysis (VCA), the n-finder algorithm (N-FINDR), and the automatic target generation process (ATGP) were applied to the synthetic images in order to evaluate their effectiveness in detecting oil spill events caused by different oil types. Results showed that different types of oil spills with various thicknesses can be extracted as endmembers.
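
    The linear-independence check used to qualify spectra as endmembers amounts to comparing the rank of the signature matrix with the number of signatures, as in the short sketch below; the library dimensions are hypothetical.

```python
# Hedged sketch: test whether candidate endmember spectra are linearly independent.
import numpy as np

def linearly_independent(spectra, tol=1e-6):
    """spectra: (n_endmembers, n_bands) array of reflectance signatures."""
    return np.linalg.matrix_rank(np.asarray(spectra), tol=tol) == len(spectra)

library = np.random.rand(8, 600)                          # hypothetical: 4 oil types + 4 look-alikes
print(linearly_independent(library))
```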

  9. Detection and Localization of Anomalous Motion in Video Sequences from Local Histograms of Labeled Affine Flows

    Directory of Open Access Journals (Sweden)

    Juan-Manuel Pérez-Rúa

    2017-05-01

    Full Text Available We propose an original method for detecting and localizing anomalous motion patterns in videos from a camera view-based motion representation perspective. Anomalous motion should be taken in a broad sense, i.e., unexpected, abnormal, singular, irregular, or unusual motion. Identifying distinctive dynamic information at any time point and at any image location in a sequence of images is a key requirement in many situations and applications. The proposed method relies on so-called labeled affine flows (LAF involving both affine velocity vectors and affine motion classes. At every pixel, a motion class is inferred from the affine motion model selected in a set of candidate models estimated over a collection of windows. Then, the image is subdivided in blocks where motion class histograms weighted by the affine motion vector magnitudes are computed. They are compared blockwise to histograms of normal behaviors with a dedicated distance. More specifically, we introduce the local outlier factor (LOF to detect anomalous blocks. LOF is a local flexible measure of the relative density of data points in a feature space, here the space of LAF histograms. By thresholding the LOF value, we can detect an anomalous motion pattern in any block at any time instant of the video sequence. The threshold value is automatically set in each block by means of statistical arguments. We report comparative experiments on several real video datasets, demonstrating that our method is highly competitive for the intricate task of detecting different types of anomalous motion in videos. Specifically, we obtain very competitive results on all the tested datasets: 99.2% AUC for UMN, 82.8% AUC for UCSD, and 95.73% accuracy for PETS 2009, at the frame level.
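
    The LOF-based decision rule can be illustrated with scikit-learn: fit the local outlier factor on block histograms gathered from normal behaviour and score new histograms, flagging those above a threshold. The histogram computation from labeled affine flows, the neighbourhood size, and the threshold are assumptions.

```python
# Hedged sketch: local outlier factor scoring of per-block motion-histogram descriptors.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def lof_scores(train_hists, test_hists, k=20):
    lof = LocalOutlierFactor(n_neighbors=k, novelty=True) # fit on normal-behaviour histograms only
    lof.fit(train_hists)
    return -lof.score_samples(test_hists)                 # larger value = more anomalous

train = np.random.rand(500, 16)                           # hypothetical normal block histograms
test = np.random.rand(10, 16)
anomalous = lof_scores(train, test) > 1.5                 # threshold set per block in the paper
```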

  10. Measuring target detection performance in paradigms with high event rates.

    Science.gov (United States)

    Bendixen, Alexandra; Andersen, Søren K

    2013-05-01

    Combining behavioral and neurophysiological measurements inevitably implies mutual constraints, such as when the neurophysiological measurement requires fast-paced stimulus presentation and hence the attribution of a behavioral response to a particular preceding stimulus becomes ambiguous. We develop and test a method for validly assessing behavioral detection performance in spite of this ambiguity. We examine four approaches taken in the literature to treat such situations. We analytically derive a new variant of computing the classical parameters of signal detection theory, hit and false alarm rates, adapted to fast-paced paradigms. Each of the previous approaches shows specific shortcomings (susceptibility towards response window choice, biased estimates of behavioral detection performance). Superior performance of our new approach is demonstrated for both simulated and empirical behavioral data. Further evidence is provided by reliable correspondence between behavioral performance and the N2b component as an electrophysiological indicator of target detection. The appropriateness of our approach is substantiated by both theoretical and empirical arguments. We demonstrate an easy-to-implement solution for measuring target detection performance independent of the rate of event presentation. Thus overcoming the measurement bias of previous approaches, our method will help to clarify the behavioral relevance of different measures of cortical activation. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
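
    For reference, the conventional window-based scoring that the authors improve upon can be written as follows; the paper's adapted formulas are not reproduced here, and the helper below (window_sdt), together with its window bounds, is purely illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def window_sdt(stim_times, is_target, resp_times, window=(0.1, 1.0)):
        """Conventional window-based scoring: a response counts as a hit (or false
        alarm) when it falls within `window` seconds after a target (or non-target).
        In fast-paced paradigms a response may fall in several windows at once,
        which is exactly the attribution ambiguity discussed above."""
        resp_times = np.asarray(resp_times, dtype=float)
        hits = fas = 0
        for t, tgt in zip(stim_times, is_target):
            responded = np.any((resp_times >= t + window[0]) &
                               (resp_times <= t + window[1]))
            if tgt:
                hits += responded
            else:
                fas += responded
        n_targets = int(np.sum(is_target))
        n_nontargets = len(is_target) - n_targets
        hit_rate = (hits + 0.5) / (n_targets + 1)      # log-linear correction
        fa_rate = (fas + 0.5) / (n_nontargets + 1)
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
        return hit_rate, fa_rate, d_prime
    ```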

  11. Research on the video detection device in the invisible part of stay cable anchorage system

    Science.gov (United States)

    Cai, Lin; Deng, Nianchun; Xiao, Zexin

    2012-11-01

    The cables in the anchorage zone of a cable-stayed bridge are hidden within the embedded pipe, which makes it difficult to detect cable damage by visual inspection. We have built a detection device based on high-resolution video capture that enables remote observation of the invisible segment of the stay cable and damage detection of the outer cable surface within a small volume. The system mainly consists of optical stents and a precision mechanical support device, an optical imaging system, a lighting source, a drive motor control and an IP camera video capture system. The principal innovations of the device are: (1) a set of telescope objectives with three different focal lengths is designed and used at different monitoring distances by means of a converter; (2) the lens system is well separated from the lighting system, so that the imaging optical path can effectively avoid the harsh environment around the invisible part of the cables. Practice shows that the device not only can collect clear surveillance video images of the outer cable surface effectively, but also has broad application prospects in the security warning of prestressed structures.

  12. Automatic Polyp Detection in Pillcam Colon 2 Capsule Images and Videos: Preliminary Feasibility Report

    Directory of Open Access Journals (Sweden)

    Pedro N. Figueiredo

    2011-01-01

    Full Text Available Background. The aim of this work is to present an automatic colorectal polyp detection scheme for capsule endoscopy. Methods. PillCam COLON2 capsule-based images and videos were used in our study. The database consists of full exam videos from five patients. The algorithm is based on the assumption that the polyps show up as a protrusion in the captured images and is expressed by means of a P-value, defined by geometrical features. Results. Seventeen PillCam COLON2 capsule videos are included, containing frames with polyps, flat lesions, diverticula, bubbles, and trash liquids. Polyps larger than 1 cm express a P-value higher than 2000, and 80% of the polyps show a P-value higher than 500. Diverticula, bubbles, trash liquids, and flat lesions were correctly interpreted by the algorithm as nonprotruding images. Conclusions. These preliminary results suggest that the proposed geometry-based polyp detection scheme works well, not only by allowing the detection of polyps but also by differentiating them from nonprotruding images found in the films.

  13. Authentication of Surveillance Videos: Detecting Frame Duplication Based on Residual Frame.

    Science.gov (United States)

    Fadl, Sondos M; Han, Qi; Li, Qiong

    2017-10-16

    Nowadays, surveillance systems are used to help control crime. Therefore, the authenticity of digital video increases the accuracy of deciding whether to admit the digital video as legal evidence. Inter-frame duplication forgery is the most common type of video forgery method. Many existing methods have been proposed for detecting this type of forgery, but they require high computational time and are impractical. In this study, we propose an efficient inter-frame duplication detection algorithm based on the standard deviation of residual frames. The standard deviation of the residual frame is used to select some frames and ignore others that represent a static scene. Then, the entropy of the discrete cosine transform coefficients is calculated for each selected residual frame to represent its discriminating feature. Duplicated frames are then detected exactly using subsequence feature analysis. The experimental results demonstrated that the proposed method is effective in identifying inter-frame duplication forgery, with localization and acceptable running time. © 2017 American Academy of Forensic Sciences.
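
    A minimal NumPy/OpenCV sketch of the per-frame feature computation described above (residual-frame standard deviation for frame selection, entropy of DCT coefficients as the feature) might look as follows; the selection threshold and the subsequent subsequence matching step are placeholders, not the published values.

    ```python
    import cv2
    import numpy as np

    def residual_dct_entropy_features(video_path, std_thresh=1.0):
        """Return the DCT-coefficient entropy of each non-static residual frame."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
        feats = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            residual = gray - prev
            if residual.std() > std_thresh:                # skip near-static scenes
                h, w = residual.shape
                even = residual[: h - h % 2, : w - w % 2]  # cv2.dct needs even sizes
                p = np.abs(cv2.dct(even)).ravel()
                p = p / (p.sum() + 1e-12)
                feats.append(float(-np.sum(p * np.log2(p + 1e-12))))
            prev = gray
        cap.release()
        return np.array(feats)  # duplicated ranges show up as matching subsequences
    ```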

  14. Reports on Polysomnograph Combined with Long-term Video Electroencephalogram for Monitoring Nocturnal Sleep-breath Events in 82 Epileptic Patients

    Directory of Open Access Journals (Sweden)

    Hongliang Li

    2013-06-01

    Full Text Available Objective: To investigate the effects of epileptic discharges during sleep on sleep-breath events in epileptic patients. Methods: Polysomnography (PSG) and long-term video electroencephalography (LTVEEG) were used to monitor 82 adult epileptic patients. The condition of paroxysmal events in nocturnal sleep was analyzed, and epileptiform discharge and the effects of antiepileptic drugs were explored. Results: In the epileptic group, latency to persistent sleep (LPS) and REM sleep latency increased, the proportion of light sleep increased while that of deep sleep decreased, sleep efficiency was reduced, nocturnal arousal times increased and apnea-hypopnea indexes (AHI) increased, all showing significant differences compared with the control group. Periodic leg movements (PLM) showed no conspicuous differences compared with the control group. There were no specific effects of epileptiform discharge and antiepileptic drugs on AHI and PLM indexes. Conclusion: Epileptic patients have sleep structure disorders and sleep-disordered breathing, and arousal, respiratory and leg movement events influence one another. Synchronous detection with PSG combined with LTVEEG favors a comprehensive analysis of the relationship between sleep structures and epilepsy-related breath events.

  15. Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets

    Science.gov (United States)

    Bazzica, A.; van Gemert, J. C.; Liem, C. C. S.; Hanjalic, A.

    2017-05-01

    Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models and detail which issues we came across during our preliminary experiments.

  16. The waveform correlation event detection system global prototype software design

    Energy Technology Data Exchange (ETDEWEB)

    Beiriger, J.I.; Moore, S.G.; Trujillo, J.R.; Young, C.J.

    1997-12-01

    The WCEDS prototype software system was developed to investigate the usefulness of waveform correlation methods for CTBT monitoring. The WCEDS prototype performs global seismic event detection and has been used in numerous experiments. This report documents the software system design, presenting an overview of the system operation, describing the system functions, tracing the information flow through the system, discussing the software structures, and describing the subsystem services and interactions. The effectiveness of the software design in meeting project objectives is considered, as well as opportunities for code reuse and lessons learned from the development process. The report concludes with recommendations for modifications and additions envisioned for a regional waveform-correlation-based detector.

  17. Use of sonification in the detection of anomalous events

    Science.gov (United States)

    Ballora, Mark; Cole, Robert J.; Kruesi, Heidi; Greene, Herbert; Monahan, Ganesh; Hall, David L.

    2012-06-01

    In this paper, we describe the construction of a soundtrack that fuses stock market data with information taken from tweets. This soundtrack, or auditory display, presents the numerical and text data in such a way that anomalous events may be readily detected, even by untrained listeners. The soundtrack generation is flexible, allowing an individual listener to create a unique audio mix from the available information sources. Properly constructed, the display exploits the auditory system's sensitivities to periodicities, to dynamic changes, and to patterns. This type of display could be valuable in environments that demand high levels of situational awareness based on multiple sources of incoming information.

  18. Head movement compensation and multi-modal event detection in eye-tracking data for unconstrained head movements.

    Science.gov (United States)

    Larsson, Linnéa; Schwaller, Andrea; Nyström, Marcus; Stridh, Martin

    2016-12-01

    The complexity of analyzing eye-tracking signals increases as eye-trackers become more mobile. The signals from a mobile eye-tracker are recorded in relation to the head coordinate system and when the head and body move, the recorded eye-tracking signal is influenced by these movements, which render the subsequent event detection difficult. The purpose of the present paper is to develop a method that performs robust event detection in signals recorded using a mobile eye-tracker. The proposed method performs compensation of head movements recorded using an inertial measurement unit and employs a multi-modal event detection algorithm. The event detection algorithm is based on the head compensated eye-tracking signal combined with information about detected objects extracted from the scene camera of the mobile eye-tracker. The method is evaluated when participants are seated 2.6 m in front of a big screen, and is therefore only valid for distant targets. The proposed method for head compensation decreases the standard deviation during intervals of fixations from 8° to 3.3° for eye-tracking signals recorded during large head movements. The multi-modal event detection algorithm outperforms both an existing algorithm (I-VDT) and the built-in algorithm of the mobile eye-tracker with an average balanced accuracy, calculated over all types of eye movements, of 0.90, compared to 0.85 and 0.75, respectively, for the compared algorithms. The proposed event detector that combines head movement compensation and information regarding detected objects in the scene video enables improved classification of events in mobile eye-tracking data. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Understanding pharmacist decision making for adverse drug event (ADE) detection.

    Science.gov (United States)

    Phansalkar, Shobha; Hoffman, Jennifer M; Hurdle, John F; Patel, Vimla L

    2009-04-01

    Manual chart review is an effective but expensive method for adverse drug event (ADE) detection. Building an expert system capable of mimicking the human expert's decision pathway, to deduce the occurrence of an ADE, can improve efficiency and lower cost. As a first step to build such an expert system, this study explores pharmacists' decision-making processes for ADE detection. Think-aloud procedures were used to elicit verbalizations as pharmacists read through ADE case scenarios. Two types of information were extracted: first, pharmacists' decision-making strategies regarding ADEs, and second, pharmacists' unmet information needs for ADE detection. Verbal protocols were recorded and analysed qualitatively to extract ADE information signals. Inter-reviewer agreement for classification of ADE information signals was calculated using Cohen's kappa. We extracted a total of 110 information signals, of which 73% consisted of information that was interpreted by the pharmacists from the case scenario and only about half (53%, n = 32) of the information signals were considered relevant for the detection of the ADEs. Excellent reliability was demonstrated between the reviewers for classifying signals. Fifty information signals regarding unmet information needs were extracted and grouped into themes based on the type of missing information. Pharmacists used a forward reasoning approach to make implicit deductions and validate hypotheses about possible ADEs. Verbal protocols also indicated that pharmacists' unmet information needs occurred frequently. Developing alerting systems that meet pharmacists' needs adequately will enhance their ability to reduce preventable ADEs, thus improving patient safety.

  20. Temporal Characteristics in Detecting Imminent Collision Events on Linear Trajectories

    Directory of Open Access Journals (Sweden)

    Rui Ni

    2011-05-01

    Full Text Available Previous research (Andersen & Kim, 2001) has shown that a linear trajectory collision event is specified by objects that expand and maintain a constant bearing (the object's projected location in the visual field). In this research, we investigated the temporal characteristics in detecting such imminent collision events. Two experiments were conducted in which participants were presented with displays simulating a single approaching object in the scene while observers were either stationary or moving at one of three speeds (24, 36, or 48 km/h). An object traveled for 9 seconds before colliding with or passing by the observer, and the relative speed between object and observer remained constant. Participants were asked to report whether the object was on a collision path or not. In the first experiment, 3 seconds or 4 seconds of displays were presented that ended at the same 2-second time-to-contact (TTC) position. In the second experiment, 3 seconds of displays were presented that ended at different TTC positions. Results show that observers were more accurate in collision detection in the stationary condition than in motion. More importantly, results suggest that observers used information on bearing change rate to distinguish noncollision objects from collision objects.

  1. Fall Detection for Elderly from Partially Observed Depth-Map Video Sequences Based on View-Invariant Human Activity Representation

    Directory of Open Access Journals (Sweden)

    Rami Alazrai

    2017-03-01

    Full Text Available This paper presents a new approach for fall detection from partially-observed depth-map video sequences. The proposed approach utilizes the 3D skeletal joint positions obtained from the Microsoft Kinect sensor to build a view-invariant descriptor for human activity representation, called the motion-pose geometric descriptor (MPGD). Furthermore, we have developed a histogram-based representation (HBR) based on the MPGD to construct a length-independent representation of the observed video subsequences. Using the constructed HBR, we formulate the fall detection problem as a posterior-maximization problem in which the posterior probability for each observed video subsequence is estimated using a multi-class SVM (support vector machine) classifier. Then, we combine the computed posterior probabilities from all of the observed subsequences to obtain an overall class posterior probability of the entire partially-observed depth-map video sequence. To evaluate the performance of the proposed approach, we have utilized the Kinect sensor to record a dataset of depth-map video sequences that simulates four fall-related activities of elderly people, including: walking, sitting, falling from standing and falling from sitting. Then, using the collected dataset, we have developed three evaluation scenarios based on the number of unobserved video subsequences in the testing videos: a fully-observed video sequence scenario, a scenario with a single unobserved video subsequence of random length, and a scenario with two unobserved video subsequences of random lengths. Experimental results show that the proposed approach achieved an average recognition accuracy of 93.6%, 77.6%, and 65.1% in recognizing the activities during the first, second and third evaluation scenario, respectively. These results demonstrate the feasibility of the proposed approach to detect falls from partially-observed videos.
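
    The posterior-combination step can be sketched with scikit-learn as shown below; the HBR feature vectors are replaced by random placeholders, and averaging log-posteriors over the observed subsequences is one plausible combination rule rather than the paper's exact formulation.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical HBR feature vectors and labels
    # (0 = walking, 1 = sitting, 2 = fall from standing, 3 = fall from sitting).
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(200, 32))
    y_train = rng.integers(0, 4, size=200)

    # probability=True enables Platt-scaled class posteriors.
    clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

    # Posteriors for the observed subsequences of one partially-observed sequence.
    subseq_feats = rng.normal(size=(5, 32))
    posteriors = clf.predict_proba(subseq_feats)          # shape (n_subseq, 4)

    # Combine subsequence posteriors into a sequence-level decision.
    sequence_posterior = np.exp(np.log(posteriors + 1e-12).mean(axis=0))
    sequence_posterior /= sequence_posterior.sum()
    predicted_activity = int(np.argmax(sequence_posterior))
    ```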

  2. Establishing a Distance Learning Plan for International Space Station (ISS) Interactive Video Education Events (IVEE)

    Science.gov (United States)

    Wallington, Clint

    1999-01-01

    Educational outreach is an integral part of the International Space Station (ISS) mandate. In a few scant years, the International Space Station has already established a tradition of successful, general outreach activities. However, as the number of outreach events increased and began to reach school classrooms, those events came under greater scrutiny by the education community. Some of the ISS electronic field trips, while informative and helpful, did not meet the generally accepted criteria for education events, especially within the context of the classroom. To make classroom outreach events more acceptable to educators, the ISS outreach program must differentiate between communication events (meant to disseminate information to the general public) and education events (designed to facilitate student learning). In contrast to communication events, education events: are directed toward a relatively homogeneous audience who are gathered together for the purpose of learning, have specific performance objectives which the students are expected to master, include a method of assessing student performance, and include a series of structured activities that will help the students to master the desired skill(s). The core of the ISS education events is an interactive videoconference between students and ISS representatives. This interactive videoconference is to be preceded by and followed by classroom activities which help the students attain the specified learning objectives. Using the interactive videoconference as the centerpiece of the education event lends a special excitement and allows students to ask questions about what they are learning and about the International Space Station and NASA. Whenever possible, the ISS outreach education events should be congruent with national guidelines for student achievement. ISS outreach staff should recognize that there are a number of different groups that will review the events, and that each group has different criteria

  3. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    National Research Council Canada - National Science Library

    Chang, Yuchou; Lee, DJ; Hong, Yi; Archibald, James

    .... In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection...

  4. Weighted symbolic analysis of human behavior for event detection

    Science.gov (United States)

    Rosani, A.; Boato, G.; De Natale, F. G. B.

    2013-03-01

    Automatic video analysis and understanding has become a high-interest research topic, with applications to video browsing, content-based video indexing, and visual surveillance. However, the automation of this process is still a challenging task, due to clutter produced by low-level processing operations. This common problem can be solved by embedding significant contextual information into the data, as well as using simple syntactic approaches to perform the matching between actual sequences and models. In this context we propose a novel framework that employs a symbolic representation of complex activities through sequences of atomic actions based on a weighted Context-Free Grammar.

  5. Moving object detection in top-view aerial videos improved by image stacking

    Science.gov (United States)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
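
    A simplified image-stacking routine (register consecutive frames, then average) could look like the sketch below using OpenCV's ECC alignment; this illustrates the generic stacking idea, not the authors' object-centred variant in which registration is applied to the moving objects themselves.

    ```python
    import cv2
    import numpy as np

    def stack_frames(frames):
        """Align consecutive gray-value frames to the first one and average them;
        the static background is denoised while anything moving blurs out."""
        ref = frames[0].astype(np.float32)
        stack = [ref]
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-5)
        for frame in frames[1:]:
            cur = frame.astype(np.float32)
            warp = np.eye(2, 3, dtype=np.float32)
            _, warp = cv2.findTransformECC(ref, cur, warp, cv2.MOTION_AFFINE,
                                           criteria, None, 5)
            aligned = cv2.warpAffine(cur, warp, (ref.shape[1], ref.shape[0]),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            stack.append(aligned)
        return np.mean(stack, axis=0)
    ```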

  6. Research of Pedestrian Crossing Safety Facilities Based on the Video Detection

    Science.gov (United States)

    Li, Sheng-Zhen; Xie, Quan-Long; Zang, Xiao-Dong; Tang, Guo-Jun

    Since current pedestrian crossing facilities are imperfect, pedestrian crossing is chaotic and pedestrians from opposite directions conflict and congest with each other, which severely reduces pedestrian traffic efficiency, obstructs vehicles and creates potential safety problems. To address these problems, a pedestrian crossing guidance system based on video identification was researched and designed. A camera monitors pedestrians in real time and counts them through a video detection program, while an array of pedestrian guidance lamps installed along the crosswalk adjusts its color display according to the proportion of pedestrians arriving from each side, guiding pedestrians from opposite directions to proceed separately. Simulation analysis with a cellular automaton model shows that the system reduces pedestrian crossing conflicts, shortens crossing time and improves the safety of pedestrians crossing.

  7. A TBB-CUDA Implementation for Background Removal in a Video-Based Fire Detection System

    Directory of Open Access Journals (Sweden)

    Fan Wang

    2014-01-01

    Full Text Available This paper presents a parallel TBB-CUDA implementation for accelerating the single-Gaussian distribution model, which is effective for background removal in video-based fire detection systems. In this framework, TBB mainly handles the initialization of the estimated Gaussian model running on the CPU, while CUDA performs background removal and adaptation of the model running on the GPU. This implementation exploits the combined computational power of TBB and CUDA and can be applied in real-time environments. Over 220 video sequences are utilized in the experiments. The experimental results illustrate that TBB+CUDA achieves a higher speedup than either TBB or CUDA alone. The proposed framework effectively overcomes the disadvantages of the CPU's limited memory bandwidth and few execution units, and it reduces data transfer latency and memory latency between the CPU and GPU.
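
    For reference, a plain NumPy version of the per-pixel single-Gaussian background model that the paper accelerates with TBB and CUDA could be written as follows; the learning rate and the deviation threshold are illustrative values.

    ```python
    import numpy as np

    class SingleGaussianBackground:
        """Per-pixel single-Gaussian background model: a pixel is foreground when
        it deviates from the running mean by more than k standard deviations;
        background pixels update the model with learning rate alpha."""

        def __init__(self, first_frame, alpha=0.01, k=2.5):
            self.mean = first_frame.astype(np.float32)
            self.var = np.full_like(self.mean, 25.0)
            self.alpha, self.k = alpha, k

        def apply(self, frame):
            frame = frame.astype(np.float32)
            diff = frame - self.mean
            foreground = diff ** 2 > (self.k ** 2) * self.var
            # Update mean/variance only where the pixel is judged background.
            upd = ~foreground
            self.mean[upd] += self.alpha * diff[upd]
            self.var[upd] = (1 - self.alpha) * self.var[upd] + self.alpha * diff[upd] ** 2
            return foreground
    ```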

  8. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  9. Detection of adverse drug events using an electronic trigger tool.

    Science.gov (United States)

    Lim, Dennison; Melucci, Joe; Rizer, Milisa K; Prier, Beth E; Weber, Robert J

    2016-09-01

    Implementation and refinement of an integrated electronic "trigger tool" for detecting adverse drug events (ADEs) is described. A three-month prospective study was conducted at a large medical center to test and improve the positive predictive value (PPV) of an electronic health record-based tool for detecting ADEs associated with use of four "trigger drugs": the reversal agents flumazenil, naloxone, phytonadione, and protamine. On administration of a trigger drug to an adult patient, an electronic message was transmitted to two pharmacists, who reviewed cases in near real time (typically, on the same day) to detect actual or potential ADEs. In phase 1 of the study, any use of a trigger drug resulted in an alert message; in subsequent phases, the alerting criteria were narrowed on the basis of clinical criteria and laboratory data with the goal of refining the trigger tool's PPV. A total of 87 drug administrations were reviewed during the three-month study period, with 27 ADEs detected. PPV values in phases 1, 2, and 3 were 0.33, 0.21, and 0.36, respectively. The relatively low overall PPV of the trigger tool was largely attributable to false-positive trigger messages associated with phytonadione use (such messages were reduced from 35 in phase 1 to 7 in phase 3). Evaluation and refinement of an electronic trigger tool based on detecting the use of the reversal agents flumazenil, naloxone, phytonadione, and protamine found an overall PPV of 0.31 during a three-month study period. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  10. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to that position and hits the shuttlecock back into the opponent's side of the court. In a game of badminton there is a large audience, and some spectators move behind the flying shuttlecock; they act as a kind of background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
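
    The 3D measurement step can be illustrated with OpenCV's triangulation routine, assuming the two cameras have been calibrated so that their 3x4 projection matrices are known; the function name and the trajectory-fitting remark are illustrative, not taken from the paper.

    ```python
    import cv2
    import numpy as np

    def triangulate_shuttlecock(P1, P2, pt1, pt2):
        """Recover the 3D position of the shuttlecock from its pixel coordinates
        in two synchronized, calibrated high-speed cameras. P1 and P2 are the
        3x4 projection matrices obtained from camera calibration (assumed given)."""
        pts1 = np.asarray(pt1, dtype=np.float32).reshape(2, 1)
        pts2 = np.asarray(pt2, dtype=np.float32).reshape(2, 1)
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1
        return (X_h[:3] / X_h[3]).ravel()                 # metric 3D coordinates

    # Velocity and the predicted landing point can then be estimated by fitting a
    # ballistic (or drag-corrected) trajectory to successive 3D positions.
    ```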

  11. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.

    Science.gov (United States)

    Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A

    2017-07-01

    Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.

  12. Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information.

    Science.gov (United States)

    Tajbakhsh, Nima; Gurudu, Suryakanth R; Liang, Jianming

    2016-02-01

    This paper presents the culmination of our research in designing a system for computer-aided detection (CAD) of polyps in colonoscopy videos. Our system is based on a hybrid context-shape approach, which utilizes context information to remove non-polyp structures and shape information to reliably localize polyps. Specifically, given a colonoscopy image, we first obtain a crude edge map. Second, we remove non-polyp edges from the edge map using our unique feature extraction and edge classification scheme. Third, we localize polyp candidates with probabilistic confidence scores in the refined edge maps using our novel voting scheme. The suggested CAD system has been tested using two public polyp databases, CVC-ColonDB, containing 300 colonoscopy images with a total of 300 polyp instances from 15 unique polyps, and ASU-Mayo database, which is our collection of colonoscopy videos containing 19,400 frames and a total of 5,200 polyp instances from 10 unique polyps. We have evaluated our system using free-response receiver operating characteristic (FROC) analysis. At 0.1 false positives per frame, our system achieves a sensitivity of 88.0% for CVC-ColonDB and a sensitivity of 48% for the ASU-Mayo database. In addition, we have evaluated our system using a new detection latency analysis where latency is defined as the time from the first appearance of a polyp in the colonoscopy video to the time of its first detection by our system. At 0.05 false positives per frame, our system yields a polyp detection latency of 0.3 seconds.

  13. Detecting Rare Events in the Time-Domain

    Energy Technology Data Exchange (ETDEWEB)

    Rest, A; Garg, A

    2008-10-31

    One of the biggest challenges in current and future time-domain surveys is to extract the objects of interest from the immense data stream. There are two aspects to achieving this goal: detecting variable sources and classifying them. Difference imaging provides an elegant technique for identifying new transients or changes in source brightness. Much progress has been made in recent years toward refining the process. We discuss a selection of pitfalls that can afflict an automated difference imaging pipeline and describe some solutions. After identifying true astrophysical variables, we are faced with the challenge of classifying them. For rare events, such as supernovae and microlensing, this challenge is magnified because we must balance having selection criteria that select for the largest number of objects of interest against a high contamination rate. We discuss considerations and techniques for developing classification schemes.

  14. Barometric pressure and triaxial accelerometry-based falls event detection.

    Science.gov (United States)

    Bianchi, Federico; Redmond, Stephen J; Narayanan, Michael R; Cerutti, Sergio; Lovell, Nigel H

    2010-12-01

    Falls and fall related injuries are a significant cause of morbidity, disability, and health care utilization, particularly among the age group of 65 years and over. The ability to detect falls events in an unsupervised manner would lead to improved prognoses for falls victims. Several wearable accelerometry and gyroscope-based falls detection devices have been described in the literature; however, they all suffer from unacceptable false positive rates. This paper investigates the augmentation of such systems with a barometric pressure sensor, as a surrogate measure of altitude, to assist in discriminating real fall events from normal activities of daily living. The acceleration and air pressure data are recorded using a wearable device attached to the subject's waist and analyzed offline. The study incorporates several protocols including simulated falls onto a mattress and simulated activities of daily living, in a cohort of 20 young healthy volunteers (12 male and 8 female; age: 23.7 ±3.0 years). A heuristically trained decision tree classifier is used to label suspected falls. The proposed system demonstrated considerable improvements in comparison to an existing accelerometry-based technique; showing an accuracy, sensitivity and specificity of 96.9%, 97.5%, and 96.5%, respectively, in the indoor environment, with no false positives generated during extended testing during activities of daily living. This is compared to 85.3%, 75%, and 91.5% for the same measures, respectively, when using accelerometry alone. The increased specificity of this system may enhance the usage of falls detectors among the elderly population.
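
    A heuristic two-stage check in the spirit of the sensor fusion described above might look like the sketch below; the thresholds and the approximate pressure-to-altitude conversion are illustrative, not the values used by the authors' decision tree.

    ```python
    import numpy as np

    def detect_fall(acc, pressure, fs=100.0, impact_thresh=2.5, drop_thresh=0.12):
        """Flag a fall candidate only if an acceleration impact peak (in g) is
        accompanied by a sustained barometric pressure rise, i.e., a drop in
        altitude. `acc` is an (N, 3) array in g, `pressure` is in hPa."""
        acc_mag = np.linalg.norm(acc, axis=1)
        impact = acc_mag.max() > impact_thresh
        # ~0.12 hPa corresponds to roughly 1 m of altitude change near sea level.
        win = int(fs)                                  # compare 1 s before/after peak
        peak = int(np.argmax(acc_mag))
        before = pressure[max(0, peak - win):peak].mean()
        after = pressure[peak:peak + win].mean()
        altitude_drop = (after - before) > drop_thresh
        return bool(impact and altitude_drop)
    ```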

  15. Tracking of Vehicle Movement on a Parking Lot Based on Video Detection

    Directory of Open Access Journals (Sweden)

    Ján HALGAŠ

    2014-06-01

    Full Text Available This article deals with the identification of transport vehicles in dynamic and static traffic based on video detection. It explains some of the technologies and approaches necessary for processing specific image information (a traffic situation). The paper also describes the design of an algorithm for vehicle detection in a parking lot and the subsequent recording of trajectories into a virtual environment. It shows a new approach to moving object detection (vehicles, people, and handlers) in an enclosed area with an emphasis on secure parking. The created application enables automatic identification of the trajectories of specific objects moving within the parking area. The application was created in the C++ programming language using the open-source library OpenCV.

  16. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  17. Communication of ALS Patients by Detecting Event-Related Potential

    Science.gov (United States)

    Kanou, Naoyuki; Sakuma, Kenji; Nakashima, Kenji

    Amyotrophic Lateral Sclerosis (ALS) patients are unable to successfully communicate their desires, although their mental capacity is the same as that of non-affected persons. Therefore, the authors put emphasis on the Event-Related Potential (ERP), which elicits the highest outcome for target visual and auditory stimuli. P300 is one component of the ERP. It is a positive potential that is elicited when the subject focuses attention on stimuli that appear infrequently. In this paper, the authors focused on the P200 and N200 components, in addition to P300, for their great improvement in the rate of correct judgment in the target word-specific experiment. Hence the authors propose an algorithm that specifies target words by detecting these three components. Ten healthy subjects and one ALS patient underwent an experiment in which a target word out of five words was specified by this algorithm. The rates of correct judgment in nine of the ten healthy subjects were more than 90.0%; the highest rate was 99.7%. The highest rate of the ALS patient was 100.0%. Through these results, the authors found the possibility that ALS patients could communicate their desires to surrounding persons by detection of the ERP (P200, N200 and P300).

  18. Experiences of citizen-based reporting of rainfall events using lab-generated videos

    Science.gov (United States)

    Alfonso, Leonardo; Chacon, Juan

    2016-04-01

    Hydrologic studies rely on the availability of good-quality precipitation estimates. However, in remote areas of the world and particularly in developing countries, ground-based measurement networks are either sparse or nonexistent. This creates difficulties in the estimation of precipitation, which limits the development of hydrologic forecasting and early warning systems for these regions. The EC-FP7 WeSenseIt project aims at exploring the involvement of citizens in the observation of the water cycle with innovative sensor technologies, including mobile telephony. In particular, the project explores the use of a smartphone application to facilitate the reporting of water-related situations. Apart from the challenge of using such information for scientific purposes, citizen engagement is one of the most important issues to address. To this end, effortless reporting methods need to be developed in order to involve as many people as possible in these experiments. As a potential solution to overcome these drawbacks, lab-controlled rainfall videos have been produced to help map the extent and distribution of rainfall fields with minimum effort [1]. In addition, the quality of the collected rainfall information has also been studied [2] by means of different experiments with students. The present research shows the latest results of the application of this method and evaluates the experiences in some cases. [1] Alfonso, L., J. Chacón, and G. Peña-Castellanos (2015), Allowing Citizens to Effortlessly Become Rainfall Sensors, in 36th IAHR World Congress edited, The Hague, the Netherlands [2] Cortes-Arevalo, J., J. Chacón, L. Alfonso, and T. Bogaard (2015), Evaluating data quality collected by using a video rating scale to estimate and report rainfall intensity, in 36th IAHR World Congress edited, The Hague, the Netherlands

  19. Identification of new events in Apollo 16 lunar seismic data by Hidden Markov Model-based event detection and classification

    Science.gov (United States)

    Knapmeyer-Endrun, Brigitte; Hammer, Conny

    2015-10-01

    Detection and identification of interesting events in single-station seismic data with little prior knowledge and under tight time constraints is a typical scenario in planetary seismology. The Apollo lunar seismic data, with the only confirmed events recorded on any extraterrestrial body yet, provide a valuable test case. Here we present the application of a stochastic event detector and classifier to the data of station Apollo 16. Based on a single-waveform example for each event class and some hours of background noise, the system is trained to recognize deep moonquakes, impacts, and shallow moonquakes and performs reliably over 3 years of data. The algorithm's demonstrated ability to detect rare events and flag previously undefined signal classes as new event types is of particular interest in the analysis of the first seismic recordings from a completely new environment. We are able to classify more than 50% of previously unclassified lunar events, and additionally find over 200 new events not listed in the current lunar event catalog. These events include deep moonquakes as well as impacts and could be used to update studies on temporal variations in event rate or deep moonquake stacks used in phase picking for localization. No unambiguous new shallow moonquake was detected, but application to data of the other Apollo stations has the potential for additional new discoveries 40 years after the data were recorded. In addition, the classification system could be useful for future seismometer missions to other planets, e.g., the InSight mission to Mars.
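
    A toy version of the per-class likelihood scoring can be sketched with the hmmlearn package (an assumption; the authors use their own HMM-based detector): one Gaussian HMM is trained per event class on feature sequences, and a detected segment is assigned to the class with the highest log-likelihood. The feature arrays below are placeholders.

    ```python
    import numpy as np
    from hmmlearn import hmm

    # One Gaussian HMM per event class, each trained on feature frames
    # (e.g., spectrogram frames) from a single template waveform plus noise.
    rng = np.random.default_rng(2)
    classes = ["deep_moonquake", "impact", "shallow_moonquake"]
    models = {}
    for name in classes:
        X = rng.normal(size=(300, 8))                 # placeholder feature frames
        m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
        m.fit(X)
        models[name] = m

    def classify(segment_features):
        """Assign a detected segment to the class whose HMM gives the highest
        log-likelihood; a separate noise model would gate detection itself."""
        scores = {name: m.score(segment_features) for name, m in models.items()}
        return max(scores, key=scores.get)
    ```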

  20. Event detection in athletics for personalized sports content delivery

    DEFF Research Database (Denmark)

    Katsarakis, N.; Pnevmatikakis, A.

    2009-01-01

    Broadcasting of athletics is nowadays biased towards running (sprint and longer distances) sports. Personalized content delivery can change that for users that wish to focus on different content. Using a combination of video signal processing algorithms and live information that accompanies...... the video of large-scale sports like the Olympics, a system can attend to the preferences of users by selecting the most suitable camera view for them. There are two types of camera selection for personalized content delivery. According to the between-sport camera selection, the view is changed between two...... sports, upon the onset of a sport higher up the user preferences than the one currently being delivered. According to the within-sport camera selection, the camera is changed to offer a better view of the evolution of the sport, based on the phase it is in. This paper details the video processing

  1. Balloon-Borne Infrasound Detection of Energetic Bolide Events

    Science.gov (United States)

    Young, Eliot F.; Ballard, Courtney; Klein, Viliam; Bowman, Daniel; Boslough, Mark

    2016-10-01

    Infrasound is usually defined as sound waves below 20 Hz, the nominal limit of human hearing. Infrasound waves propagate over vast distances through the Earth's atmosphere: the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization) has 48 installed infrasound-sensing stations around the world to detect nuclear detonations and other disturbances. In February 2013, several CTBTO infrasound stations detected infrasound signals from a large bolide that exploded over Chelyabinsk, Russia. Some stations recorded signals that had circumnavigated the Earth, over a day after the original event. The goal of this project is to improve upon the sensitivity of the CTBTO network by putting microphones on small, long-duration super-pressure balloons, with the overarching goal of studying the small end of the NEO population by using the Earth's atmosphere as a witness plate. A balloon-borne infrasound sensor is expected to have two advantages over ground-based stations: a lack of wind noise and a concentration of infrasound energy in the "stratospheric duct" between roughly 5 - 50 km altitude. To test these advantages, we have built a small balloon payload with five calibrated microphones. We plan to fly this payload on a NASA high-altitude balloon from Ft Sumner, NM in August 2016. We have arranged for three large explosions to take place in Socorro, NM while the balloon is aloft to assess the sensitivity of balloon-borne vs. ground-based infrasound sensors. We will report on the results from this test flight and the prospects for detecting/characterizing small bolides in the stratosphere.

  2. Events leading to anterior cruciate ligament injury in World Cup Alpine Skiing: a systematic video analysis of 20 cases.

    Science.gov (United States)

    Bere, Tone; Flørenes, Tonje Wåle; Krosshaug, Tron; Nordsletten, Lars; Bahr, Roald

    2011-12-01

    The authors have recently identified three main mechanisms for anterior cruciate ligament (ACL) injuries among World Cup (WC) alpine skiers, termed "the slip-catch", "the landing back-weighted" and "the dynamic snowplow". However, for a more complete understanding of how these injuries occur, a description of the events leading to the injury situations is also needed. The aim of this study was to describe the skiing situation leading to ACL injuries in WC alpine skiing. Twenty cases of ACL injuries reported through the International Ski Federation Injury Surveillance System (FIS ISS) for three consecutive WC seasons (2006-2009) were obtained on video. Ten experts (9 WC coaches, 1 former WC athlete) performed visual analyses of each case to describe, in their own words, factors they thought may have contributed to the injury situation related to different predefined categories: (1) skier technique, (2) skier strategy, (3) equipment, (4) speed and course setting, (5) visibility, snow and piste conditions and (6) any other factors. Factors related to three categories, namely skier technique, skier strategy, and visibility, snow and piste conditions, were assumed to be the main contributors to the injury situations. Skier errors, technical mistakes and inappropriate tactical choices were the dominant factors. In addition, bumpy conditions, aggressive snow, reduced visibility and course difficulties were assumed to contribute. Based on this systematic video analysis of 20 injury situations, factors related to skier technique, skier strategy and specific race conditions were identified as the main contributors leading to injury situations.

  3. An Effective Method for Small Event Detection: Match and Locate (ML) and Its Applications

    Science.gov (United States)

    Zhang, M.; Wen, L.

    2014-12-01

    Detection of low-magnitude events is critical and challenging in seismology. Traditional methods of event detection, which rely on phase identification, are usually hindered by the low signal-to-noise ratio (SNR) of small event recordings. We develop a new method, named the match and locate (ML) method, for small event detection. The ML method employs some template events and detects small events through stacking cross-correlograms between waveforms of the template events and potential small event signals in the continuous waveforms over multiple stations and components. Unlike the traditional matched filter method, which assumes that the template event and slave event are co-located, the ML method scans over potential small event locations around the template, making relative travel time corrections based on the relative locations of the template event and the potential small event before stacking. It makes event detection more efficient and at the same time relocates the detected event with high precision. As an example of application and comparison with the matched filter method, we apply the ML and matched filter methods to detect the foreshocks before the 2011 Mw 9.0 Tohoku earthquake. The ML method detects four times more events than the templates and 10% more than the matched filter under the same detection threshold. Up to 42% of the events detected by the ML method are not co-located with the template locations, with the largest event separation being 9.4 km. As another example of application, we apply the ML method to search for potential nuclear tests conducted by North Korea in the continuous seismic data recorded in Northeast China, using North Korea's 2009 and 2013 tests as templates. We report detection of a low-yield nuclear test conducted by North Korea in 2010.
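
    The stacking step of the ML method can be sketched as follows; `continuous` and `templates` are assumed to be per-station waveform arrays, `moveouts[loc][sta]` holds non-negative integer sample shifts predicted for each trial location, and the normalised cross-correlation loop is written naively for clarity rather than speed.

    ```python
    import numpy as np

    def ncc(trace, tmpl):
        """Normalised cross-correlation of a template against a continuous trace."""
        tmpl = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)
        out = np.empty(len(trace) - len(tmpl) + 1)
        for i in range(len(out)):
            seg = trace[i:i + len(tmpl)]
            seg = (seg - seg.mean()) / (seg.std() + 1e-12)
            out[i] = np.mean(seg * tmpl)
        return out

    def match_and_locate(continuous, templates, moveouts, threshold):
        """Shift each station's cross-correlogram by the moveout predicted for a
        trial location, stack, and declare detections where the stack exceeds
        the threshold. Returns (location, sample index, stacked CC) tuples."""
        cc = {sta: ncc(continuous[sta], templates[sta]) for sta in continuous}
        detections = []
        for loc, shifts in moveouts.items():
            length = min(len(cc[sta]) - shifts[sta] for sta in cc)
            stacked = np.mean([cc[sta][shifts[sta]:shifts[sta] + length]
                               for sta in cc], axis=0)
            for t in np.flatnonzero(stacked > threshold):
                detections.append((loc, int(t), float(stacked[t])))
        return detections
    ```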

  4. DETECT: a MATLAB toolbox for event detection and identification in time series, with applications to artifact detection in EEG signals.

    Directory of Open Access Journals (Sweden)

    Vernon Lawhern

    Full Text Available Recent advances in sensor and recording technology have allowed scientists to acquire very large time-series datasets. Researchers often analyze these datasets in the context of events, which are intervals of time where the properties of the signal change relative to a baseline signal. We have developed DETECT, a MATLAB toolbox for detecting event time intervals in long, multi-channel time series. Our primary goal is to produce a toolbox that is simple for researchers to use, allowing them to quickly train a model on multiple classes of events, assess the accuracy of the model, and determine how closely the results agree with their own manual identification of events without requiring extensive programming knowledge or machine learning experience. As an illustration, we discuss application of the DETECT toolbox for detecting signal artifacts found in continuous multi-channel EEG recordings and show the functionality of the tools found in the toolbox. We also discuss the application of DETECT for identifying irregular heartbeat waveforms found in electrocardiogram (ECG) data as an additional illustration.

  5. Multiple Moving Object Detection for Fast Video Content Description in Compressed Domain

    Directory of Open Access Journals (Sweden)

    Boris Mansencal

    2007-11-01

    Full Text Available Indexing deals with the automatic extraction of information with the objective of automatically describing and organizing the content. In a video stream, different types of information can be considered semantically important. Since we can assume that the most relevant information is linked to the presence of moving foreground objects, their number, their shape, and their appearance can constitute a good means for content description. For this reason, we propose to combine motion information and region-based color segmentation to extract moving objects from an MPEG2 compressed video stream, starting from low-resolution data only. This approach, which we refer to as “rough indexing,” consists in processing P-frame motion information first, and then performing I-frame color segmentation. Next, since many details can be lost due to the low-resolution data, a novel spatiotemporal filter has been developed to improve the object detection results; it is constituted by a quadric surface modeling the object trace over time. This method effectively corrects possible earlier detection errors without heavily increasing the computational effort.

  6. Event-based home safety problem detection under the CPS home safety architecture

    OpenAIRE

    Yang, Zhengguo; Lim, Azman Osman; Tan, Yasuo

    2013-01-01

    This paper presents a CPS (Cyber-Physical System) home safety architecture for home safety problem detection and reaction, and shows some example cases. For home safety problem detection, three levels of events are defined: the elementary event, the semantic event and the entire event, representing meaning at the level of a single parameter, a single safety problem, and the whole safety status of a house, respectively. For the relationship between these events and raw data, a Finite State Machine (FSM) based m...
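
    A minimal finite-state-machine sketch for turning elementary events into a single semantic safety state, in the spirit of the architecture described above; the states, events and transitions are hypothetical rather than taken from the paper.

    ```python
    # Hypothetical transition table: (current state, elementary event) -> next state.
    TRANSITIONS = {
        ("safe", "temperature_high"): "warning",
        ("warning", "smoke_detected"): "fire_alarm",
        ("warning", "temperature_normal"): "safe",
        ("fire_alarm", "manual_reset"): "safe",
    }

    def step(state, event):
        """Return the next semantic safety state given an elementary event."""
        return TRANSITIONS.get((state, event), state)

    state = "safe"
    for ev in ["temperature_high", "smoke_detected"]:
        state = step(state, ev)
    print(state)  # -> fire_alarm
    ```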

  7. ΤND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    Science.gov (United States)

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND for the provision of a second, more objective opinion to radiologists by exploiting image evidence.

  8. Acoustic Neuroma Educational Video

    Medline Plus

  12. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    Science.gov (United States)

    Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang

    2011-01-01

    This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990

  13. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    Directory of Open Access Journals (Sweden)

    Yang-Lang Chang

    2011-07-01

    Full Text Available This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions.

  14. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  15. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  16. Endoscopic trimodal imaging detects colonic neoplasia as well as standard video endoscopy.

    Science.gov (United States)

    Kuiper, Teaco; van den Broek, Frank J C; Naber, Anton H; van Soest, Ellert J; Scholten, Pieter; Mallant-Hent, Rosalie Ch; van den Brande, Jan; Jansen, Jeroen M; van Oijen, Arnoud H A M; Marsman, Willem A; Bergman, Jacques J G H M; Fockens, Paul; Dekker, Evelien

    2011-06-01

    Endoscopic trimodal imaging (ETMI) is a novel endoscopic technique that combines high-resolution endoscopy (HRE), autofluorescence imaging (AFI), and narrow-band imaging (NBI) that has only been studied in academic settings. We performed a randomized, controlled trial in a nonacademic setting to compare ETMI with standard video endoscopy (SVE) in the detection and differentiation of colorectal lesions. The study included 234 patients scheduled to receive colonoscopy who were randomly assigned to undergo a colonoscopy in tandem with either ETMI or SVE. In the ETMI group (n=118), first examination was performed using HRE, followed by AFI. In the other group, both examinations were performed using SVE (n=116). In the ETMI group, detected lesions were differentiated using AFI and NBI. In the ETMI group, 87 adenomas were detected in the first examination (with HRE), and then 34 adenomas were detected during second inspection (with AFI). In the SVE group, 79 adenomas were detected during the first inspection, and then 33 adenomas were detected during the second inspection. Adenoma detection rates did not differ significantly between the 2 groups (ETMI: 1.03 vs SVE: 0.97, P=.360). The adenoma miss-rate was 29% for HRE and 28% for SVE. The sensitivity, specificity, and accuracy of NBI in differentiating adenomas from nonadenomatous lesions were 87%, 63%, and 75%, respectively; corresponding values for AFI were 90%, 37%, and 62%, respectively. In a nonacademic setting, ETMI did not improve the detection rate for adenomas compared with SVE. NBI and AFI each differentiated colonic lesions with high levels of sensitivity but low levels of specificity. Copyright © 2011 AGA Institute. Published by Elsevier Inc. All rights reserved.

  17. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    Science.gov (United States)

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

    Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events-a component of Level 1 situation awareness-using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  18. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  19. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.
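
    As a rough illustration of the kind of pipeline this abstract describes, the sketch below quantizes each frame to a small colour palette (using k-means as a stand-in for the Fibonacci lattice quantization), runs SIFT on the resulting colour-index image, and clusters per-frame descriptors to group frames into shots. It is a minimal sketch under those assumptions, not the CGSIFT implementation; the clustering-ensemble step is reduced to a single k-means run, and it assumes OpenCV with SIFT support plus scikit-learn.

      # Minimal sketch, not the CGSIFT implementation: k-means colour quantization
      # stands in for Fibonacci lattice quantization, and a single k-means run
      # stands in for the clustering ensemble.
      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def frame_descriptor(frame, n_colors=32):
          """Quantize a BGR frame to n_colors and average the SIFT descriptors
          computed on the contrast-stretched colour-index image."""
          pixels = frame.reshape(-1, 3).astype(np.float32)
          palette = KMeans(n_clusters=n_colors, n_init=4).fit(pixels[::17])
          index_img = palette.predict(pixels).reshape(frame.shape[:2]).astype(np.uint8)
          index_img = cv2.normalize(index_img, None, 0, 255, cv2.NORM_MINMAX)
          _kp, desc = cv2.SIFT_create().detectAndCompute(index_img, None)
          return desc.mean(axis=0) if desc is not None else np.zeros(128)

      # usage (frames: list of BGR images, n_shots: expected number of shots)
      # descriptors = np.vstack([frame_descriptor(f) for f in frames])
      # shot_labels = KMeans(n_clusters=n_shots).fit_predict(descriptors)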

  20. Comparison Of Processing Time Of Different Size Of Images And Video Resolutions For Object Detection Using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Yogesh Yadav

    2017-01-01

    Full Text Available Object detection with small computation cost and processing time is a necessity in diverse domains such as traffic analysis, security cameras, and video surveillance. With current advances in technology and the decrease in prices of image sensors and video cameras, the resolution of captured images exceeds 1 MP at higher frame rates. This implies a considerable data size that needs to be processed in a very short period of time when real-time operations and data processing are needed. Real-time video processing with high performance can be achieved with GPU technology. The aim of this study is to evaluate the influence of different image and video resolutions on the processing time, the number of object detections, and the accuracy of the detected objects. The MOG2 algorithm is used for processing video input data with the GPU module. A fuzzy inference system is used to evaluate the accuracy of the number of detected objects and to show the difference between CPU and GPU computing methods.
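
    The comparison described above can be reproduced in outline with OpenCV's MOG2 background subtractor. The snippet below is a minimal sketch, not the authors' code: it times MOG2-based detection at two frame resolutions and counts foreground blobs above an area threshold. The video path, the area threshold, and the resolutions are illustrative assumptions, and the fuzzy-inference evaluation and the CUDA GPU module are not reproduced.

      # Minimal sketch (assumptions noted in the text above): compare per-frame
      # MOG2 processing cost and blob counts at two resolutions.
      import time
      import cv2

      def count_moving_objects(video_path, size, min_area=500):
          """Run MOG2 on frames resized to `size`; return (blobs, seconds/frame)."""
          cap = cv2.VideoCapture(video_path)           # video_path is a placeholder
          mog2 = cv2.createBackgroundSubtractorMOG2()
          frames, blobs, start = 0, 0, time.time()
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              frame = cv2.resize(frame, size)
              mask = mog2.apply(frame)                  # foreground mask
              mask = cv2.medianBlur(mask, 5)            # suppress speckle noise
              contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                             cv2.CHAIN_APPROX_SIMPLE)
              blobs += sum(cv2.contourArea(c) > min_area for c in contours)
              frames += 1
          cap.release()
          return blobs, (time.time() - start) / max(frames, 1)

      for size in [(640, 360), (1280, 720)]:            # two illustrative resolutions
          print(size, count_moving_objects("traffic.mp4", size))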

  1. A Multi-view Approach for Detecting Non-Cooperative Users in Online Video Sharing Systems

    OpenAIRE

    Langbehn, Hendrickson Reiter; Ricci, Saulo M. R.; Gonçalves, Marcos A.; Almeida, Jussara Marques; Pappa, Gisele Lobo; Benevenuto, Fabrício

    2010-01-01

    Most online video sharing systems (OVSSs), such as YouTube and Yahoo! Video, have several mechanisms for supporting interactions among users. One such mechanism is the video response feature in YouTube, which allows a user to post a video in response to another video. While increasingly popular, the video response feature opens the opportunity for non-cooperative users to introduce "content pollution" into the system, thus causing loss of service effectiveness and credibility as w...

  2. Advantages of respiratory monitoring during video-EEG evaluation to differentiate epileptic seizures from other events

    Science.gov (United States)

    Pavlova, Milena; Abdennadher, Myriam; Singh, Kanwaljit; Katz, Eliot; Llewellyn, Nichelle; Zarowsly, Marcin; White, David P.; Dworetzky, Barbara A.; Kothare, Sanjeev V.

    2014-01-01

    Distinction between epileptic (ES) and seizure-like events of non-epileptic nature (SLNE) is often difficult using descriptions of seizure semiology. Cardiopulmonary dysfunction is frequent in ES but has not been objectively examined in relationship to SLNE. Our purpose was to compare cardiopulmonary dysfunction between ES and SLNE. We prospectively recorded cardio-pulmonary function using pulse-oximetry, EKG and respiratory inductance plethysmography (RIP) in 52 ES and 22 SLNE. Comparison of cardiopulmonary complications between ES and SLNE was done using two-sample t-tests and logistic regression. Ictal bradypnea and pre-ictal bradycardia were more frequent in ES than SLNE (p1.0). Cardio-respiratory dysfunction, specifically bradypnea, apnea, pre-ictal bradycardia, and oxygen desaturation, is more frequently seen in ES than in SLNE. Tachycardia was not discriminant between ES and SLNE. PMID:24561659

  3. Investigation on effectiveness of mid-level feature representation for semantic boundary detection in news video

    Science.gov (United States)

    Radhakrishan, Regunathan; Xiong, Ziyou; Divakaran, Ajay; Raj, Bhiksha

    2003-11-01

    In our past work, we have attempted to use a mid-level feature namely the state population histogram obtained from the Hidden Markov Model (HMM) of a general sound class, for speaker change detection so as to extract semantic boundaries in broadcast news. In this paper, we compare the performance of our previous approach with another approach based on video shot detection and speaker change detection using the Bayesian Information Criterion (BIC). Our experiments show that the latter approach performs significantly better than the former. This motivated us to examine the mid-level feature closely. We found that the component population histogram enabled discovery of broad phonetic categories such as vowels, nasals, fricatives etc, regardless of the number of distinct speakers in the test utterance. In order for it to be useful for speaker change detection, the individual components should model the phonetic sounds of each speaker separately. From our experiments, we conclude that state/component population histograms can only be useful for further clustering or semantic class discovery if the features are chosen carefully so that the individual states represent the semantic categories of interest.

  4. Automated Detection of Financial Events in News Text

    NARCIS (Netherlands)

    F.P. Hogenboom (Frederik)

    2014-01-01

    Today’s financial markets are inextricably linked with financial events like acquisitions, profit announcements, or product launches. Information extracted from news messages that report on such events could hence be beneficial for financial decision making. The ubiquity of news,

  5. Unsupervised video-based lane detection using location-enhanced topic models

    Science.gov (United States)

    Sun, Hao; Wang, Cheng; Wang, Boliang; El-Sheimy, Naser

    2010-10-01

    An unsupervised learning algorithm based on topic models is presented for lane detection in video sequences observed by uncalibrated moving cameras. Our contributions are twofold. First, we introduce the maximally stable extremal region (MSER) detector for lane-marking feature extraction and derive a novel shape descriptor in an affine invariant manner to describe region shapes and a modified scale-invariant feature transform descriptor to capture feature appearance characteristics. MSER features are more stable compared to edge points or line pairs and hence provide robustness to lane-marking variations in scale, lighting, viewpoint, and shadows. Second, we proposed a novel location-enhanced probabilistic latent semantic analysis (pLSA) topic model for simultaneous lane recognition and localization. The proposed model overcomes the limitation of a pLSA model for effective topic localization. Experimental results on traffic sequences in various scenarios demonstrate the effectiveness and robustness of the proposed method.

  6. Event Detection Challenges, Methods, and Applications in Natural and Artificial Systems

    Science.gov (United States)

    2009-03-01

    Sauvageon, Agogino, Mehr, and Tumer [2006], for instance, use a fourth degree polynomial within an event detection algorithm to sense high... (Sauvageon J, Agogino AM, Mehr AF, and Tumer IY. 2006. “Comparison of Event Detection Methods for Centralized Sensor Networks.” IEEE Sensors Applications Symposium 2006.)

  7. An On-Line Method for Thermal Diffusivity Detection of Thin Films Using Infrared Video

    Directory of Open Access Journals (Sweden)

    Dong Huilong

    2016-03-01

    Full Text Available A novel method for thermal diffusivity evaluation of thin-film materials with pulsed Gaussian beam and infrared video is reported. Compared with common pulse methods performed in specialized labs, the proposed method implements a rapid on-line measurement without producing the off-centre detection error. Through mathematical deduction of the original heat conduction model, it is discovered that the area s, which is encircled by the maximum temperature curve r_TMAX(θ), increases linearly over elapsed time. The thermal diffusivity is acquired from the growth rate of the area s. In this study, the off-centre detection error is avoided by performing the distance regularized level set evolution formulation. The area s was extracted from the binary images of temperature variation rate, without inducing errors from determination of the heat source centre. Thermal diffusivities of three materials, 304 stainless steel, titanium, and zirconium, have been measured with the established on-line detection system, and the measurement errors are −2.26%, −1.07%, and 1.61%, respectively.
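
    The core of the method, the linear growth of the area s over elapsed time, can be sketched as a simple slope fit. The snippet below assumes a time-ordered stack of infrared frames as a NumPy array and a plain binarization threshold; the constant that converts the slope into a thermal diffusivity depends on the paper's heat-conduction model and is left symbolic, and the distance-regularized level-set boundary extraction is not reproduced.

      # Minimal sketch under the assumptions stated above.
      import numpy as np

      def area_growth_rate(frames, times, threshold):
          """frames: array of shape (T, H, W); times: array of shape (T,) in seconds.
          Returns the least-squares slope of the thresholded area versus time."""
          areas = np.array([(f > threshold).sum() for f in frames], dtype=float)
          slope, _intercept = np.polyfit(np.asarray(times, dtype=float), areas, 1)
          return slope  # pixels (or mm^2 after calibration) per second

      # The diffusivity would then follow as alpha = k * area_growth_rate(...),
      # where k is the model-specific constant from the paper (not derived here).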

  8. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

    Full Text Available Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with the electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is a challenge because of the changeability of smoke patterns, the moving camera and the different lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities such as fog, low and high density in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. Features are based on motion, colour and texture patterns of the smoke. We validated our algorithm using experimental evaluation on four laparoscopic cholecystectomy videos. These four videos were manually annotated by defining every frame as a smoke or non-smoke frame. The algorithm was applied to the videos by using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e. correctly classified frames) is around 84%, while the sensitivity (i.e. correctly detected smoke frames) and the specificity (i.e. correctly detected non-smoke frames) are 89% and 80%, respectively.
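
    The classification stage can be sketched with scikit-learn: train an SVM on per-frame feature vectors and report accuracy, sensitivity and specificity as in the abstract. This is a minimal sketch, not the authors' pipeline; the motion, colour and texture feature extraction is replaced by a placeholder feature matrix and placeholder labels.

      # Minimal sketch (placeholder features/labels stand in for the extracted
      # motion, colour and texture descriptors of each frame).
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 24))            # placeholder per-frame features
      y = rng.integers(0, 2, size=1000)          # placeholder labels: 1 = smoke frame

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
      print("accuracy   ", (tp + tn) / (tp + tn + fp + fn))
      print("sensitivity", tp / (tp + fn))       # correctly detected smoke frames
      print("specificity", tn / (tn + fp))       # correctly detected non-smoke frames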

  9. Setting objective thresholds for rare event detection in flow cytometry

    OpenAIRE

    Richards, Adam J.; Staats, Janet; Enzor, Jennifer; McKinnon, Katherine; Frelinger, Jacob; Denny, Thomas N.; Weinhold, Kent J.; Chan, Cliburn

    2014-01-01

    The accurate identification of rare antigen-specific cytokine positive cells from peripheral blood mononuclear cells (PBMC) after antigenic stimulation in an intracellular staining (ICS) flow cytometry assay is challenging, as cytokine positive events may be fairly diffusely distributed and lack an obvious separation from the negative population. Traditionally, the approach by flow operators has been to manually set a positivity threshold to partition events into cytokine-positive and cytokin...

  10. A Review on Video/Image Authentication and Tamper Detection Techniques

    Science.gov (United States)

    Parmar, Zarna; Upadhyay, Saurabh

    2013-02-01

    With the innovations and development in sophisticated video editing technology and a wide spread of video information and services in our society, it is becoming increasingly significant to assure the trustworthiness of video information. Therefore in surveillance, medical and various other fields, video contents must be protected against attempt to manipulate them. Such malicious alterations could affect the decisions based on these videos. A lot of techniques are proposed by various researchers in the literature that assure the authenticity of video information in their own way. In this paper we present a brief survey on video authentication techniques with their classification. These authentication techniques are generally classified into following categories: digital signature based techniques, watermark based techniques, and other authentication techniques.

  11. Energy-Efficient Fault-Tolerant Dynamic Event Region Detection in Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Enemark, Hans-Jacob; Zhang, Yue; Dragoni, Nicola

    2015-01-01

    Fault-tolerant event detection is fundamental to wireless sensor network applications. Existing approaches usually adopt neighborhood collaboration for better detection accuracy, while need more energy consumption due to communication. Focusing on energy efficiency, this paper makes an improvement...

  12. Signal detection to identify serious adverse events (neuropsychiatric events) in travelers taking mefloquine for chemoprophylaxis of malaria

    Directory of Open Access Journals (Sweden)

    Naing C

    2012-08-01

    Full Text Available Background: For all medications, there is a trade-off between benefits and potential for harm. It is important for patient safety to detect drug-event combinations and analyze them by appropriate statistical methods. Mefloquine is used as chemoprophylaxis for travelers going to regions with known chloroquine-resistant Plasmodium falciparum malaria. As such, there is a concern about serious adverse events associated with mefloquine chemoprophylaxis. The objective of the present study was to assess whether any signal would be detected for the serious adverse events of mefloquine, based on data in clinicoepidemiological studies. Materials and methods: We extracted data on adverse events related to mefloquine chemoprophylaxis from two published datasets. Disproportionate reporting of adverse events such as neuropsychiatric events and other adverse events was presented in a 2 × 2 contingency table. The reporting odds ratio (ROR) and corresponding 95% confidence interval (CI) data-mining algorithm was applied for signal detection. Safety signals are considered significant when the ROR estimate and the lower limit of the corresponding 95% CI are ≥2. Results: Two datasets addressing adverse events of mefloquine chemoprophylaxis (one from a published article and one from a Cochrane systematic review) were included for analyses. The reporting odds ratio was 1.58 (95% CI: 1.49–1.68) based on published data in the selected article, and 1.195 (95% CI: 0.94–1.44) based on data in the selected Cochrane review. Overall, in both datasets, the lower limits of the 95% CI of the reporting odds ratio were less than 2. Conclusion: Based on available data, findings suggested that signals for serious adverse events pertinent to neuropsychiatric events were
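
    The signal criterion used here (reporting odds ratio and the lower bound of its 95% CI both ≥ 2) can be computed directly from the 2 × 2 contingency table. The sketch below uses the standard ROR formula with a normal approximation on the log scale; the counts are illustrative placeholders, not the study's data.

      # Minimal sketch of the reporting odds ratio (ROR) and its 95% CI from a
      # 2x2 contingency table: a = drug & event, b = drug & no event,
      # c = other drugs & event, d = other drugs & no event.
      import math

      def reporting_odds_ratio(a, b, c, d):
          ror = (a * d) / (b * c)
          se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # SE of ln(ROR)
          lower = math.exp(math.log(ror) - 1.96 * se)
          upper = math.exp(math.log(ror) + 1.96 * se)
          return ror, (lower, upper)

      ror, ci = reporting_odds_ratio(a=120, b=880, c=300, d=3700)  # illustrative counts
      print(ror, ci, "signal" if ror >= 2 and ci[0] >= 2 else "no signal")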

  13. Network hydraulics inclusion in water quality event detection using multiple sensor stations data.

    Science.gov (United States)

    Oliker, Nurit; Ostfeld, Avi

    2015-09-01

    Event detection is one of the current most challenging topics in water distribution systems analysis: how regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations can be efficiently utilized to detect water quality contamination events. This study describes an integrated event detection model which combines multiple sensor stations data with network hydraulics. To date event detection modelling is likely limited to single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and as a result might be significantly exposed to false positive alarms. This work is aimed at decreasing this limitation through integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort through discovering events with lower signatures by exploring the sensors mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features.

    Science.gov (United States)

    Billah, Mustain; Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most of the cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps. But, because it is an operator dependent procedure, several human factors can lead to misdetection of polyps. Computer aided polyp detection can reduce polyp miss detection rate and assists doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support to gastrointestinal polyp detection. This system captures the video streams from endoscopic video and, in the output, it shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined together which are used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms the state-of-the-art methods, gaining accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.

  15. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Science.gov (United States)

    Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most of the cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps. But, because it is an operator dependent procedure, several human factors can lead to misdetection of polyps. Computer aided polyp detection can reduce polyp miss detection rate and assists doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support to gastrointestinal polyp detection. This system captures the video streams from endoscopic video and, in the output, it shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined together which are used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms the state-of-the-art methods, gaining accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%. PMID:28894460

  16. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Directory of Open Access Journals (Sweden)

    Mustain Billah

    2017-01-01

    Full Text Available Gastrointestinal polyps are considered to be the precursors of cancer development in most of the cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps. But, because it is an operator dependent procedure, several human factors can lead to misdetection of polyps. Computer aided polyp detection can reduce polyp miss detection rate and assists doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support to gastrointestinal polyp detection. This system captures the video streams from endoscopic video and, in the output, it shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined together which are used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms the state-of-the-art methods, gaining accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.
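
    The feature-fusion step amounts to concatenating the two feature blocks before training the linear SVM. The sketch below is a minimal illustration under that assumption, not the authors' implementation: both feature blocks are random placeholders standing in for the colour-wavelet and CNN features.

      # Minimal sketch of feature-level fusion followed by a linear SVM.
      import numpy as np
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      cw_features = rng.random((500, 64))       # colour-wavelet block (placeholder)
      cnn_features = rng.random((500, 512))     # CNN block (placeholder)
      labels = rng.integers(0, 2, 500)          # 1 = polyp frame (placeholder)

      X = np.hstack([cw_features, cnn_features])   # simple concatenation fusion
      clf = LinearSVC(C=1.0, max_iter=5000).fit(X, labels)
      print("training accuracy:", clf.score(X, labels))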

  17. SMART VIDEO SURVEILLANCE SYSTEM FOR VEHICLE DETECTION AND TRAFFIC FLOW CONTROL

    Directory of Open Access Journals (Sweden)

    A. A. SHAFIE

    2011-08-01

    Full Text Available Traffic signal lights can be optimized using vehicle flow statistics obtained by Smart Video Surveillance Software (SVSS). This research focuses on an efficient traffic control system that detects and counts vehicle numbers at various times and locations. At present, one of the biggest problems in the main city of any country is the traffic jam during office hours and office break hours. Sometimes it can be seen that the traffic signal green light is still ON even though there is no vehicle coming. Similarly, it is also observed that long queues of vehicles are waiting even though the road is empty, due to traffic signal light selection without proper investigation of vehicle flow. This can be handled by adjusting the vehicle passing time as implemented by our developed SVSS. A number of experimental results on vehicle flow are discussed graphically in this research in order to test the feasibility of the developed system. Finally, an adaptive background model is proposed in SVSS in order to successfully detect target objects such as motorbikes, cars, buses, etc.

  18. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with successively three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image in intensity homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  19. A Study on Double Event Detection for PHENIX at RHIC

    Science.gov (United States)

    Vazquez-Carson, Sebastian; Phenix Collaboration

    2016-09-01

    Many measurements made in Heavy Ion experiments such as PHENIX at RHIC focus on geometrical properties because phenomena such as collective flow give insight into quark-gluon plasma and the strong nuclear force. As part of this investigation, PHENIX has taken data in 2016 for deuteron on gold collisions at several energies. An acceptable luminosity is achieved by injecting up to 120 separate bunches each with billions of ions into the storage ring, from which two, separate beams are made to collide. This method has a drawback as there is a chance for multiple pairs of nuclei to collide in a single bunch crossing. Data taken in a double event cannot be separated into two independent events and has no clear interpretation. This effect's magnitude is estimated and incorporated in published results as a systematic uncertainty and studies on this topic have already been conducted within PHENIX. I develop several additional algorithms to flag multiple interaction events by examining the time dependence of data from the two Beam-Beam Counters - detectors surrounding the beam pipe on opposite ends of the interaction region. The algorithms are tested with data, in which events with double interactions are artificially produced using low luminosity data. I am working at the University of Colorado at Boulder on behalf of the PHENIX collaboration.

  20. Progress in air shower radio measurements : detection of distant events

    NARCIS (Netherlands)

    Bähren, L.; Buitink, S.J.; Falcke, H.D.E.; Horneffer, K.H.A.; Kuijpers, J.M.E.; Lafebre, S.J.; Nigl, A.; Petrovic, J.; Singh, K.

    2006-01-01

    Data taken during half a year of operation of 10 LOPES antennas (LOPES-10), triggered by EAS observed with KASCADE-Grande have been analysed. We report about the analysis of correlations of radio signals measured by LOPES-10 with extensive air shower events reconstructed by KASCADE-Grande, including

  1. Event detection using population-based health care databases in randomized clinical trials

    DEFF Research Database (Denmark)

    Thuesen, Leif; Jensen, Lisette Okkels; Tilsted, Hans Henrik

    2013-01-01

    To describe a new research tool, designed to reflect routine clinical practice and relying on population-based health care databases to detect clinical events in randomized clinical trials.

  2. Setting objective thresholds for rare event detection in flow cytometry.

    Science.gov (United States)

    Richards, Adam J; Staats, Janet; Enzor, Jennifer; McKinnon, Katherine; Frelinger, Jacob; Denny, Thomas N; Weinhold, Kent J; Chan, Cliburn

    2014-07-01

    The accurate identification of rare antigen-specific cytokine positive cells from peripheral blood mononuclear cells (PBMC) after antigenic stimulation in an intracellular staining (ICS) flow cytometry assay is challenging, as cytokine positive events may be fairly diffusely distributed and lack an obvious separation from the negative population. Traditionally, the approach by flow operators has been to manually set a positivity threshold to partition events into cytokine-positive and cytokine-negative. This approach suffers from subjectivity and inconsistency across different flow operators. The use of statistical clustering methods does not remove the need to find an objective threshold between positive and negative events since consistent identification of rare event subsets is highly challenging for automated algorithms, especially when there is distributional overlap between the positive and negative events ("smear"). We present a new approach, based on the Fβ measure, that is similar to manual thresholding in providing a hard cutoff, but has the advantage of being determined objectively. The performance of this algorithm is compared with results obtained by expert visual gating. Several ICS data sets from the External Quality Assurance Program Oversight Laboratory (EQAPOL) proficiency program were used to make the comparisons. We first show that visually determined thresholds are difficult to reproduce and pose a problem when comparing results across operators or laboratories, as well as problems that occur with the use of commonly employed clustering algorithms. In contrast, a single parameterization for the Fβ method performs consistently across different centers, samples, and instruments because it optimizes the precision/recall tradeoff by using both negative and positive controls. Copyright © 2014 Elsevier B.V. All rights reserved.
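
    The idea of replacing a visually chosen cutoff with an objective one can be sketched as a search over candidate thresholds that maximizes the Fβ measure, Fβ = (1 + β^2)·precision·recall / (β^2·precision + recall), computed against positive and negative control samples. The snippet below is a simplified one-dimensional sketch with simulated control data, not the published implementation.

      # Minimal sketch: pick the cutoff on a single channel that maximizes F_beta
      # against simulated negative and positive control samples.
      import numpy as np

      def f_beta(precision, recall, beta):
          if precision == 0 and recall == 0:
              return 0.0
          return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

      def best_threshold(neg, pos, beta=1.0):
          best_t, best_f = None, -1.0
          candidates = np.percentile(np.concatenate([neg, pos]),
                                     np.linspace(50, 99.9, 200))
          for t in candidates:
              tp, fp, fn = (pos > t).sum(), (neg > t).sum(), (pos <= t).sum()
              precision = tp / (tp + fp) if (tp + fp) else 0.0
              recall = tp / (tp + fn) if (tp + fn) else 0.0
              f = f_beta(precision, recall, beta)
              if f > best_f:
                  best_t, best_f = t, f
          return best_t, best_f

      rng = np.random.default_rng(1)
      neg = rng.normal(0.0, 1.0, 20000)                               # negative control
      pos = np.concatenate([rng.normal(0.0, 1.0, 19800),              # positive control:
                            rng.normal(4.0, 1.0, 200)])               # 1% rare positives
      print(best_threshold(neg, pos, beta=1.0))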

  3. A System for Reflective Learning Using Handwriting Tablet Devices for Real-Time Event Bookmarking into Simultaneously Recorded Videos

    Science.gov (United States)

    Nakajima, Taira

    2012-01-01

    The author demonstrates a new system useful for reflective learning. Our new system offers an environment that one can use handwriting tablet devices to bookmark symbolic and descriptive feedbacks into simultaneously recorded videos in the environment. If one uses video recording and feedback check sheets in reflective learning sessions, one can…

  4. Filtering Video Noise as Audio with Motion Detection to Form a Musical Instrument

    OpenAIRE

    Thomé, Carl

    2016-01-01

    Even though they differ in the physical domain, digital video and audio share many characteristics. Both are temporal data streams often stored in buffers with 8-bit values. This paper investigates a method for creating harmonic sounds with a video signal as input. A musical instrument is proposed, that utilizes video in both a sound synthesis method, and in a controller interface for selecting musical notes at specific velocities. The resulting instrument was informally determined by the aut...

  5. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for an UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support, once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.

  6. Signal Detection of Imipenem Compared to Other Drugs from Korea Adverse Event Reporting System Database

    OpenAIRE

    Park, Kyounghoon; Soukavong, Mick; Kim, Jungmee; Kwon, Kyoung-Eun; Jin, Xue-Mei; Lee, Joongyub; Yang, Bo Ram; Park, Byung-Joo

    2017-01-01

    Purpose To detect signals of adverse drug events after imipenem treatment using the Korea Institute of Drug Safety & Risk Management-Korea adverse event reporting system database (KIDS-KD). Materials and Methods We performed data mining using KIDS-KD, which was constructed using spontaneously reported adverse event (AE) reports between December 1988 and June 2014. We detected signals by calculating the proportional reporting ratio, reporting odds ratio, and information component of imipenem. We d...

  7. Seizure clusters and adverse events during pre-surgical video-EEG monitoring with a slow anti-epileptic drug (AED) taper.

    Science.gov (United States)

    Di Gennaro, Giancarlo; Picardi, Angelo; Sparano, Antonio; Mascia, Addolorata; Meldolesi, Giulio N; Grammaldo, Liliana G; Esposito, Vincenzo; Quarato, Pier P

    2012-03-01

    To evaluate the efficiency and safety of pre-surgical video-EEG monitoring with a slow anti-epileptic drug (AED) taper and a rescue benzodiazepine protocol. Fifty-four consecutive patients with refractory focal epilepsy who underwent pre-surgical video-electroencephalography (EEG) monitoring during the year 2010 were included in the study. Time to first seizure, duration of monitoring, incidence of 4-h and 24-h seizure clustering, secondarily generalised tonic-clonic seizures (sGTCS), status epilepticus, falls and cardiac asystole were evaluated. A total of 190 seizures were recorded. Six (11%) patients had 4-h clusters and 21 (39%) patients had 24-h clusters. While 15 sGTCS were recorded in 14 patients (26%), status epilepticus did not occur and no seizure was complicated with cardiac asystole. Epileptic falls with no significant injuries occurred in three patients. The mean time to first seizure was 3.3 days and the time to conclude video-EEG monitoring averaged 6 days. Seizure clustering was common during pre-surgical video-EEG monitoring, although serious adverse events were rare with a slow AED tapering and a rescue benzodiazepine protocol. Slow AED taper pre-surgical video-EEG monitoring is fairly safe when performed in a highly specialised and supervised hospital setting. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Biomedical event trigger detection by dependency-based word embedding.

    Science.gov (United States)

    Wang, Jian; Zhang, Jianhai; An, Yuan; Lin, Hongfei; Yang, Zhihao; Zhang, Yijia; Sun, Yuanyuan

    2016-08-10

    In biomedical research, events revealing complex relations between entities play an important role. Biomedical event trigger identification has become a research hotspot since its important role in biomedical event extraction. Traditional machine learning methods, such as support vector machines (SVM) and maxent classifiers, which aim to manually design powerful features fed to the classifiers, depend on the understanding of the specific task and cannot generalize to the new domain or new examples. In this paper, we propose an approach which utilizes neural network model based on dependency-based word embedding to automatically learn significant features from raw input for trigger classification. First, we employ Word2vecf, the modified version of Word2vec, to learn word embedding with rich semantic and functional information based on dependency relation tree. Then neural network architecture is used to learn more significant feature representation based on raw dependency-based word embedding. Meanwhile, we dynamically adjust the embedding while training for adapting to the trigger classification task. Finally, softmax classifier labels the examples by specific trigger class using the features learned by the model. The experimental results show that our approach achieves a micro-averaging F1 score of 78.27% and a macro-averaging F1 score of 76.94% in significant trigger classes, and performs better than baseline methods. In addition, we can achieve the semantic distributed representation of every trigger word.
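
    The classification stage described above (features learned on top of dependency-based word embeddings, followed by a softmax classifier) can be approximated with a small feed-forward network. The sketch below uses scikit-learn's MLPClassifier on placeholder embedding vectors; it is an assumption-laden stand-in, not the paper's architecture, and the Word2vecf embedding step is not reproduced.

      # Minimal sketch: feed-forward classifier over pre-computed word vectors.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      embeddings = rng.normal(size=(2000, 200))        # placeholder word vectors
      trigger_labels = rng.integers(0, 5, size=2000)   # placeholder trigger classes

      clf = MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                          max_iter=300).fit(embeddings, trigger_labels)
      print("training accuracy:", clf.score(embeddings, trigger_labels))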

  9. Supervised machine learning on a network scale: application to seismic event classification and detection

    Science.gov (United States)

    Reynen, Andrew; Audet, Pascal

    2017-09-01

    A new method using a machine learning technique is applied to event classification and detection at seismic networks. This method is applicable to a variety of network sizes and settings. The algorithm makes use of a small catalogue of known observations across the entire network. Two attributes, the polarization and frequency content, are used as input to regression. These attributes are extracted at predicted arrival times for P and S waves using only an approximate velocity model, as attributes are calculated over large time spans. This method of waveform characterization is shown to be able to distinguish between blasts and earthquakes with 99 per cent accuracy using a network of 13 stations located in Southern California. The combination of machine learning with generalized waveform features is further applied to event detection in Oklahoma, United States. The event detection algorithm makes use of a pair of unique seismic phases to locate events, with a precision directly related to the sampling rate of the generalized waveform features. Over a week of data from 30 stations in Oklahoma, United States are used to automatically detect 25 times more events than the catalogue of the local geological survey, with a false detection rate of less than 2 per cent. This method provides a highly confident way of detecting and locating events. Furthermore, a large number of seismic events can be automatically detected with low false alarm, allowing for a larger automatic event catalogue with a high degree of trust.
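
    The supervised setting described above (per-event attributes such as polarization and frequency content, a small labelled catalogue, and a regression-based classifier) can be sketched with a logistic-regression model. The snippet below uses placeholder attributes and labels; it is a minimal sketch, not the published method, and the attribute extraction at predicted P and S arrival times is not reproduced.

      # Minimal sketch: blast-vs-earthquake classification from per-event attributes.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      attributes = rng.normal(size=(400, 8))      # placeholder per-event attributes
      labels = rng.integers(0, 2, size=400)       # 1 = earthquake, 0 = blast (placeholder)

      model = LogisticRegression(max_iter=1000)
      print("cross-validated accuracy:",
            cross_val_score(model, attributes, labels, cv=5).mean())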

  10. Contamination event detection using multiple types of conventional water quality sensors in source water.

    Science.gov (United States)

    Liu, Shuming; Che, Han; Smith, Kate; Chen, Lei

    2014-08-01

    Early warning systems are often used to detect deliberate and accidental contamination events in a water system. Conventional methods normally detect a contamination event by comparing the predicted and observed water quality values from one sensor. This paper proposes a new method for event detection by exploring the correlative relationships between multiple types of conventional water quality sensors. The performance of the proposed method was evaluated using data from contaminant injection experiments in a laboratory. Results from these experiments demonstrated the correlative responses of multiple types of sensors. It was observed that the proposed method could detect a contamination event 9 minutes after the introduction of lead nitrate solution with a concentration of 0.01 mg L(-1). The proposed method employs three parameters. Their impact on the detection performance was also analyzed. The initial analysis showed that the correlative response is contaminant-specific, which implies that it can be utilized not only for contamination detection, but also for contaminant identification.

  11. Falls event detection using triaxial accelerometry and barometric pressure measurement.

    Science.gov (United States)

    Bianchi, Federico; Redmond, Stephen J; Narayanan, Michael R; Cerutti, Sergio; Celler, Branko G; Lovell, Nigel H

    2009-01-01

    A falls detection system, employing a Bluetooth-based wearable device, containing a triaxial accelerometer and a barometric pressure sensor, is described. The aim of this study is to evaluate the use of barometric pressure measurement, as a surrogate measure of altitude, to augment previously reported accelerometry-based falls detection algorithms. The accelerometry and barometric pressure signals obtained from the waist-mounted device are analyzed by a signal processing and classification algorithm to discriminate falls from activities of daily living. This falls detection algorithm has been compared to two existing algorithms which utilize accelerometry signals alone. A set of laboratory-based simulated falls, along with other tasks associated with activities of daily living (16 tests) were performed by 15 healthy volunteers (9 male and 6 female; age: 23.7 +/- 2.9 years; height: 1.74 +/- 0.11 m). The algorithm incorporating pressure information detected falls with the highest sensitivity (97.8%) and the highest specificity (96.7%).
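
    A minimal sketch of the idea of augmenting accelerometry with barometric pressure is given below: flag a candidate fall when the acceleration magnitude exceeds an impact threshold and the pressure-derived altitude estimate drops shortly afterwards. The thresholds, the rough near-sea-level conversion of about 8.3 m per hPa, and the windowing are illustrative assumptions, not the published algorithm.

      # Minimal sketch under the assumptions stated above.
      # acc: array of shape (N, 3) in g; pressure_hpa: array of shape (N,);
      # fs: integer sampling rate in Hz.
      import numpy as np

      def detect_falls(acc, pressure_hpa, fs, impact_g=2.5, drop_m=0.5):
          magnitude = np.linalg.norm(acc, axis=1)
          altitude = -8.3 * (pressure_hpa - pressure_hpa[0])   # rough m per hPa change
          events = []
          for i in np.where(magnitude > impact_g)[0]:          # candidate impacts
              before = altitude[max(i - fs, 0):i + 1].mean()   # ~1 s before impact
              after = altitude[i:min(i + 2 * fs, len(altitude))].mean()  # ~2 s after
              if before - after > drop_m:                      # net downward change
                  events.append(i / fs)                        # event time in seconds
          return events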

  12. ISOMER: Informative Segment Observations for Multimedia Event Recounting

    NARCIS (Netherlands)

    Sun, C.; Burns, B.; Nevatia, R.; Snoek, C.; Bolles, B.; Myers, G.; Wang, W.; Yeh, E.

    2014-01-01

    This paper describes a system for multimedia event detection and recounting. The goal is to detect a high level event class in unconstrained web videos and generate event oriented summarization for display to users. For this purpose, we detect informative segments and collect observations for them,

  13. Study on the Detection of Moving Target in the Mining Method Based on Hybrid Algorithm for Sports Video Analysis

    Directory of Open Access Journals (Sweden)

    Huang Tian

    2014-10-01

    Full Text Available Moving object detection and tracking is a hot research direction in computer vision and image processing. Based on an analysis of the moving target detection and tracking algorithms in common use, this work focuses on tracking non-rigid targets in sports video. In sports video, non-rigid athletes often undergo physical deformation during movement, and the moving target may become occluded. The surge of media data also makes fast search and query more difficult. However, the majority of users want to be able to quickly extract the content of interest and implicit knowledge (concepts, rules, patterns, models, and correlations) from multimedia data, to retrieve and query it quickly, and to obtain hierarchical decision support for problem solving. Taking the moving objects in sports video as the object of study, this paper conducts systematic research on the theoretical level and the technical framework, mining layer by layer from low-level motion features to high-level motion-video semantics, which not only helps users find information quickly but can also provide decision support for solving their problems.

  14. Spatial-temporal event detection in climate parameter imagery.

    Energy Technology Data Exchange (ETDEWEB)

    McKenna, Sean Andrew; Gutierrez, Karen A.

    2011-10-01

    Previously developed techniques that comprise statistical parametric mapping, with applications focused on human brain imaging, are examined and tested here for new applications in anomaly detection within remotely-sensed imagery. Two approaches to analysis are developed: online, regression-based anomaly detection and conditional differences. These approaches are applied to two example spatial-temporal data sets: data simulated with a Gaussian field deformation approach and weekly NDVI images derived from global satellite coverage. Results indicate that anomalies can be identified in spatial temporal data with the regression-based approach. Additionally, La Niña and El Niño climatic conditions are used as different stimuli applied to the earth and this comparison shows that El Niño conditions lead to significant decreases in NDVI in both the Amazon Basin and in Southern India.
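
    The online, regression-based branch of the approach can be sketched as a per-pixel trend fit with residual thresholding. The snippet below is a simplified stand-in for the statistical parametric mapping machinery, under the assumptions of a linear temporal trend and a 3-sigma residual rule.

      # Minimal sketch: flag per-pixel residuals from a linear temporal trend.
      import numpy as np

      def regression_anomalies(cube, k=3.0):
          """cube: array of shape (T, H, W); returns a boolean anomaly mask of the
          same shape, marking observations whose residual exceeds k sigma."""
          T, H, W = cube.shape
          t = np.arange(T, dtype=float)
          flat = cube.reshape(T, -1)
          slope, intercept = np.polyfit(t, flat, 1)          # per-pixel least squares
          residual = flat - (np.outer(t, slope) + intercept)
          z = residual / (residual.std(axis=0) + 1e-12)
          return (np.abs(z) > k).reshape(T, H, W)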

  15. Detection and fine-grained classification of cyberbullying events

    OpenAIRE

    Van Hee, Cynthia; Lefever, Els; Verhoeven, Ben; Mennes, Julie; Desmet, Bart; De Pauw, Guy; Daelemans, Walter; Hoste, Veronique

    2015-01-01

    In the current era of online interactions, both positive and negative experiences are abundant on the Web. As in real life, negative experiences can have a serious impact on youngsters. Recent studies have reported cybervictimization rates among teenagers that vary between 20% and 40%. In this paper, we focus on cyberbullying as a particular form of cybervictimization and explore its automatic detection and fine-grained classification. Data containing cyberbullying was collected from the soci...

  16. Motion-based video monitoring for early detection of livestock diseases: The case of African swine fever.

    Science.gov (United States)

    Fernández-Carrión, Eduardo; Martínez-Avilés, Marta; Ivorra, Benjamin; Martínez-López, Beatriz; Ramos, Ángel Manuel; Sánchez-Vizcaíno, José Manuel

    2017-01-01

    Early detection of infectious diseases can substantially reduce the health and economic impacts on livestock production. Here we describe a system for monitoring animal activity based on video and data processing techniques, in order to detect slowdown and weakening due to infection with African swine fever (ASF), one of the most significant threats to the pig industry. The system classifies and quantifies motion-based animal behaviour and daily activity in video sequences, allowing automated and non-intrusive surveillance in real-time. The aim of this system is to evaluate significant changes in animals' motion after being experimentally infected with ASF virus. Indeed, pig mobility declined progressively and fell significantly below pre-infection levels starting at four days after infection at a confidence level of 95%. Furthermore, daily motion decreased in infected animals by approximately 10% before the detection of the disease by clinical signs. These results show the promise of video processing techniques for real-time early detection of livestock infectious diseases.
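
    The monitor described above reduces to a per-frame motion measure aggregated into daily activity and compared against a pre-infection baseline. Below is a minimal sketch of that idea, assuming simple grayscale frame differencing and a one-sided 95% comparison against the baseline days; the function names, the pixel-change threshold, and the statistical test are illustrative choices, not the system described in the paper.

```python
import numpy as np
import cv2  # assumed available; any source of video frames would work


def motion_per_frame(frames):
    """Fraction of pixels that changed noticeably between consecutive frames."""
    scores, prev = [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = cv2.absdiff(gray, prev)
            scores.append(np.mean(diff > 25))  # illustrative change threshold
        prev = gray
    return np.asarray(scores)


def low_activity_days(daily_motion, baseline_days, z=1.645):
    """Flag days whose mean motion falls below the one-sided 95% band of the baseline."""
    mu, sigma = np.mean(baseline_days), np.std(baseline_days)
    return daily_motion < mu - z * sigma
```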

  17. Detecting impacts of extreme events with ecological in situ monitoring networks

    Directory of Open Access Journals (Sweden)

    M. D. Mahecha

    2017-09-01

    Full Text Available Extreme hydrometeorological conditions typically impact ecophysiological processes on land. Satellite-based observations of the terrestrial biosphere provide an important reference for detecting and describing the spatiotemporal development of such events. However, in-depth investigations of ecological processes during extreme events require additional in situ observations. The question is whether the density of existing ecological in situ networks is sufficient for analysing the impact of extreme events, and what are the expected event detection rates of ecological in situ networks of a given size. To assess these issues, we build a baseline of extreme reductions in the fraction of absorbed photosynthetically active radiation (FAPAR), identified by a new event detection method tailored to identify extremes of regional relevance. We then investigate the event detection success rates of hypothetical networks of varying sizes. Our results show that large extremes can be reliably detected with relatively small networks, but also reveal a linear decay of detection probabilities towards smaller extreme events in log–log space. For instance, networks with ≈ 100 randomly placed sites in Europe yield a ≥ 90 % chance of detecting the eight largest (typically very large) extreme events, but only a ≥ 50 % chance of capturing the 39 largest events. These findings are consistent with probability-theoretic considerations, but the slopes of the decay rates deviate due to temporal autocorrelation and the exact implementation of the extreme event detection algorithm. Using the examples of AmeriFlux and NEON, we then investigate to what degree ecological in situ networks can capture extreme events of a given size. Consistent with our theoretical considerations, we find that today's systematically designed networks (i.e. NEON) reliably detect the largest extremes, but that the extreme event detection rates are not higher than would

  18. Detecting impacts of extreme events with ecological in situ monitoring networks

    Science.gov (United States)

    Mahecha, Miguel D.; Gans, Fabian; Sippel, Sebastian; Donges, Jonathan F.; Kaminski, Thomas; Metzger, Stefan; Migliavacca, Mirco; Papale, Dario; Rammig, Anja; Zscheischler, Jakob

    2017-09-01

    Extreme hydrometeorological conditions typically impact ecophysiological processes on land. Satellite-based observations of the terrestrial biosphere provide an important reference for detecting and describing the spatiotemporal development of such events. However, in-depth investigations of ecological processes during extreme events require additional in situ observations. The question is whether the density of existing ecological in situ networks is sufficient for analysing the impact of extreme events, and what are expected event detection rates of ecological in situ networks of a given size. To assess these issues, we build a baseline of extreme reductions in the fraction of absorbed photosynthetically active radiation (FAPAR), identified by a new event detection method tailored to identify extremes of regional relevance. We then investigate the event detection success rates of hypothetical networks of varying sizes. Our results show that large extremes can be reliably detected with relatively small networks, but also reveal a linear decay of detection probabilities towards smaller extreme events in log-log space. For instance, networks with ≈ 100 randomly placed sites in Europe yield a ≥ 90 % chance of detecting the eight largest (typically very large) extreme events; but only a ≥ 50 % chance of capturing the 39 largest events. These findings are consistent with probability-theoretic considerations, but the slopes of the decay rates deviate due to temporal autocorrelation and the exact implementation of the extreme event detection algorithm. Using the examples of AmeriFlux and NEON, we then investigate to what degree ecological in situ networks can capture extreme events of a given size. Consistent with our theoretical considerations, we find that today's systematically designed networks (i.e. NEON) reliably detect the largest extremes, but that the extreme event detection rates are not higher than would be achieved by randomly designed networks. Spatio

  19. Full-waveform detection of non-impulsive seismic events based on time-reversal methods

    Science.gov (United States)

    Solano, Ericka Alinne; Hjörleifsdóttir, Vala; Liu, Qinya

    2017-12-01

    We present a full-waveform detection method for non-impulsive seismic events, based on time-reversal principles. We use the strain Green's tensor as a matched filter, correlating it with continuous observed seismograms, to detect non-impulsive seismic events. We show that this is mathematically equivalent to an adjoint method for detecting earthquakes. We define the detection function, a scalar valued function, which depends on the stacked correlations for a group of stations. Event detections are given by the times at which the amplitude of the detection function exceeds a given value relative to the noise level. The method can make use of the whole seismic waveform or any combination of time windows with different filters. It is expected to have an advantage over traditional detection methods for events that do not produce energetic and impulsive P waves, for example glacial events, landslides, volcanic events and transform-fault earthquakes, for events for which the velocity structure along the path is relatively well known. Furthermore, the method has advantages over empirical Green's function template-matching methods, as it does not depend on records from previously detected events, and therefore is not limited to events occurring in similar regions and with similar focal mechanisms as those events. The method is not specific to any particular way of calculating the synthetic seismograms, and therefore complicated structural models can be used. This is particularly beneficial for intermediate-size events that are registered on regional networks, for which the effect of lateral structure on the waveforms can be significant. To demonstrate the feasibility of the method, we apply it to two different areas located along the mid-oceanic ridge system west of Mexico where non-impulsive events have been reported. The first study area is between the Clipperton and Siqueiros transform faults (9°N), during the time of two earthquake swarms, occurring in March 2012 and May
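
    In essence, the detection function above is a stack of correlations between precomputed synthetic (Green's-tensor-based) waveforms and the continuous records, thresholded relative to the noise level. The sketch below illustrates that structure with plain NumPy; the array layout, the simplified normalization, and the MAD-based threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np


def detection_function(continuous, templates):
    """Stack per-station correlations of continuous records with synthetic templates.

    continuous : (n_stations, n_samples) observed seismograms
    templates  : (n_stations, n_template) synthetics derived from the strain Green's tensor
    """
    n_template = templates.shape[1]
    stack = None
    for data, tmpl in zip(continuous, templates):
        tmpl = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)   # simplified normalization
        cc = np.correlate(data, tmpl, mode="valid") / n_template
        stack = cc if stack is None else stack + cc
    return stack / len(continuous)


def detect_events(stack, n_mad=8.0):
    """Times (sample indices) where the stack exceeds a noise-relative threshold."""
    mad = np.median(np.abs(stack - np.median(stack))) + 1e-12
    return np.flatnonzero(stack > np.median(stack) + n_mad * mad)
```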

  20. Face Recognition and Event Detection in Video: An Overview of PROVE-IT Projects

    Science.gov (United States)

    2014-07-01

    [Extraction-damaged excerpt] The overview covers PROVE-IT sub-projects including: (13) Face Recognition to Improve Voice/Iris Biometrics, in which face recognition is used as a supplementary biometric to increase confidence in a match made using a different biometric (for example iris, voice, or fingerprints); and (14) soft biometrics to improve face recognition. Some capabilities remain in the realm of academic research in the Type 3 environment, and the e-Gate environment was not evaluated.

  1. High-resolution seismic event detection using local similarity for Large-N arrays.

    Science.gov (United States)

    Li, Zefeng; Peng, Zhigang; Hollis, Dan; Zhu, Lijun; McClellan, James

    2018-01-26

    We develop a novel method for seismic event detection that can be applied to large-N arrays. The method is based on a new detection function named local similarity, which quantifies the signal consistency between the examined station and its nearest neighbors. Using the 5200-station Long Beach nodal array, we demonstrate that stacked local similarity functions can be used to detect seismic events with amplitudes near or below noise levels. We apply the method to one week of continuous data around the 11 March 2011 Mw 9.1 Tohoku-Oki earthquake to detect local and distant events. In the 5-10 Hz range, we detect various events of natural and anthropogenic origins, but without a clear increase in local seismicity during and following the surface waves of the Tohoku-Oki mainshock. In the 1-Hz low-pass-filtered range, we detect numerous events, likely representing aftershocks from the Tohoku-Oki mainshock region. This high-resolution detection technique can be applied to both ultra-dense and regular array recordings for monitoring ultra-weak micro-seismicity and detecting unusual seismic events in noisy environments.
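
    The local similarity statistic can be thought of, per time window, as each station's average correlation with its nearest neighbors, averaged (stacked) over the array. A minimal sketch of one possible implementation follows; the zero-lag correlation, the k-nearest-neighbor search, and the per-window application are assumptions for illustration rather than the exact definition used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree


def local_similarity(waveforms, coords, k=4):
    """Per-station mean zero-lag correlation with its k nearest neighbors.

    waveforms : (n_stations, n_samples) band-passed records for one time window
    coords    : (n_stations, 2) station coordinates
    """
    w = waveforms - waveforms.mean(axis=1, keepdims=True)
    w /= np.linalg.norm(w, axis=1, keepdims=True) + 1e-12    # unit-norm traces
    _, idx = cKDTree(coords).query(coords, k=k + 1)          # first neighbor is the station itself
    sims = np.array([w[i] @ w[idx[i, 1:]].T for i in range(len(w))])
    return sims.mean(axis=1)


def stacked_local_similarity(waveforms, coords, k=4):
    """Array-wide detection statistic for one sliding window of continuous data."""
    return local_similarity(waveforms, coords, k).mean()
```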

  2. Occam's approach to video critical behavior detection: a practical real time video in-vehicle alertness monitor.

    Science.gov (United States)

    Steffin, Morris; Wahl, Keith

    2004-01-01

    Driver and pilot fatigue and incapacitation are major causes of injuries and equipment loss. A method is proposed for constant in-vehicle monitoring of alertness, including detection of drowsiness and incapacitation. Novel features of this method include increases in efficiency and specificity that allow real time monitoring in the functional environment by practicable and affordable hardware. The described approach should result in a generally deployable system with acceptable sensitivity and specificity and with capability for operator alarms and automated vehicle intervention to prevent injuries caused by reduced levels of operator performance.

  3. Understanding Behaviors in Videos through Behavior-Specific Dictionaries

    DEFF Research Database (Denmark)

    Ren, Huamin; Liu, Weifeng; Olsen, Søren Ingvor

    2018-01-01

    Understanding behaviors is the core of video content analysis, which is highly related to two important applications: abnormal event detection and action recognition. Dictionary learning, as one of the mid-level representations, is an important step to process a video. It has achieved state...

  4. Understanding behaviors in videos through behavior-specific dictionaries

    DEFF Research Database (Denmark)

    Ren, Huamin; Liu, Weifeng; Olsen, Søren Ingvor

    2018-01-01

    Understanding behaviors is the core of video content analysis, which is highly related to two important applications: abnormal event detection and action recognition. Dictionary learning, as one of the mid-level representations, is an important step to process a video. It has achieved state...

  5. High-resolution bolometers for rare events detection

    Energy Technology Data Exchange (ETDEWEB)

    Vanzini, M. E-mail: marco.vanzini@mi.infn.it; Alessandrello, A.; Brofferio, C.; Bucci, C.; Coccia, E.; Cremonesi, O.; Fafone, V.; Fiorini, E.; Giuliani, A.; Nucciotti, A.; Pavan, M.; Peruzzi, A.; Pessina, G.; Pirro, S.; Pobes, C.; Parmeggiano, S.; Perego, M.; Previtali, E.; Rotilio, A.; Zanotti, L

    2001-04-01

    For many years the Milano-Gran Sasso collaboration has been developing large-mass calorimeters for Double Beta Decay and Dark Matter searches, employing TeO₂ crystals as absorber elements. Recently, we have focused our attention on improving the detector resolution: an efficient damping suspension and the implementation of a new cold electronics device have strongly suppressed the main sources of noise. The increase in S/N ratio has been of almost an order of magnitude, and the resolution achieved is competitive with that of Ge diodes for γ-ray detection, while a FWHM of 3.2±0.3 keV has been obtained for 5.4 MeV alpha particles, the best result with any kind of detector.

  6. High-resolution bolometers for rare events detection

    Science.gov (United States)

    Vanzini, M.; Alessandrello, A.; Brofferio, C.; Bucci, C.; Coccia, E.; Cremonesi, O.; Fafone, V.; Fiorini, E.; Giuliani, A.; Nucciotti, A.; Pavan, M.; Peruzzi, A.; Pessina, G.; Pirro, S.; Pobes, C.; Parmeggiano, S.; Perego, M.; Previtali, E.; Rotilio, A.; Zanotti, L.

    2001-04-01

    For many years the Milano-Gran Sasso collaboration has been developing large-mass calorimeters for Double Beta Decay and Dark Matter searches, employing TeO₂ crystals as absorber elements. Recently, we have focused our attention on improving the detector resolution: an efficient damping suspension and the implementation of a new cold electronics device have strongly suppressed the main sources of noise. The increase in S/N ratio has been of almost an order of magnitude, and the resolution achieved is competitive with that of Ge diodes for γ-ray detection, while a FWHM of 3.2±0.3 keV has been obtained for 5.4 MeV alpha particles, the best result with any kind of detector.

  7. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction

    National Research Council Canada - National Science Library

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    .... Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer...

  8. Video monitoring of visible atmospheric emissions: from a manual device to a new fully automatic detection and classification device; Video surveillance des rejets atmospheriques d'un site siderurgique: d'un systeme manuel a la detection automatique

    Energy Technology Data Exchange (ETDEWEB)

    Bardet, I.; Ryckelynck, F.; Desmonts, T. [Sollac, 59 - Dunkerque (France)

    1999-11-01

    Complete text of publication follows: the context of strong local sensitivity to dust emissions from an integrated steel plant justifies monitoring the emission of abnormally coloured smoke from this plant. In a first step, the watch was done 'visually', by screening and counting the puff emissions through a set of seven cameras and video recorders. The development of a new device performing automatic picture analysis then made the inspection automatic. The new system detects and counts the incidents and sends an alarm to the process operator. After further testing, this approach to automatic detection can be extended to other uses in the environmental field. (authors)

  9. Detection of isolated covert saccades with the video head impulse test in peripheral vestibular disorders.

    Science.gov (United States)

    Blödow, Alexander; Pannasch, Sebastian; Walther, Leif Erik

    2013-08-01

    The function of the semicircular canal receptors and the pathway of the vestibulo-ocular-reflex (VOR) can be diagnosed with the clinical head impulse test (cHIT). Recently, the video head impulse test (vHIT) has been introduced but so far there is little clinical experience with the vHIT in patients with peripheral vestibular disorders. The aim of the study was to investigate the horizontal VOR (hVOR) by means of vHIT in peripheral vestibular disorders. Using the vHIT, we examined the hVOR in a group of 117 patients and a control group of 20 healthy subjects. The group of patients included vestibular neuritis (VN) (n=52), vestibular schwannoma (VS) (n=31), Ménière's disease (MD) (n=22) and bilateral vestibulopathy (BV) (n=12). Normal hVOR gain was at 0.96 ± 0.08, while abnormal hVOR gain was at 0.44 ± 0.20 (79.1% of all cases). An abnormal vHIT was found in VN (94.2%), VS (61.3%), MD (54.5%) and BV (91.7%). Three conditions of refixation saccades occurred frequently in cases with abnormal hVOR: isolated covert saccades (13.7%), isolated overt saccades (34.3%) and the combination of overt and covert saccades (52.0%). The vHIT detects abnormal hVOR changes in the combination of gain assessment and refixation saccades. Since isolated covert saccades in hVOR changes can only be seen with vHIT, peripheral vestibular disorders are likely to be diagnosed incorrectly with the cHIT to a certain amount. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. A New Mining Method to Detect Real Time Substance Use Events from Wearable Biosensor Data Stream.

    Science.gov (United States)

    Wang, Jin; Fang, Hua; Carreiro, Stephanie; Wang, Honggang; Boyer, Edward

    2017-01-01

    Detecting real-time substance use is a critical step for optimizing behavioral interventions to prevent drug abuse. Traditional methods based on self-reporting or urine screening are inefficient or intrusive for drug use detection, and inappropriate for timely interventions. For example, self-report suffers from distortion or recall bias, while urine screening can only detect drug use that occurred within the previous 72 hours. Methods for real-time substance use detection are severely underdeveloped, partly due to the novelty of wearable biosensor techniques and the lack of substantive clinical data for evaluation. We propose a new real-time drug use event detection method using data obtained from wearable biosensors. Specifically, this method is built upon a sliding window technique to process the data stream, and a distance-based outlier detection method to identify substance use events. This novel method is designed to examine how to detect drug use events and how to set the parameter thresholds for real-time detection on wearable biosensor data streams. Our numerical analyses empirically identified the thresholds of parameters used to detect cocaine use and showed that this proposed method could be adapted to detect other substance use events.
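
    The pipeline described is essentially a sliding window over the biosensor stream followed by a distance-based outlier test on windowed features. A minimal sketch under those assumptions follows; the window summary statistics, the k-th-nearest-neighbor distance score, and the threshold are placeholders rather than the tuned values reported for cocaine use detection.

```python
import numpy as np


def window_features(stream, win=60, step=10):
    """Summarize each sliding window of a 1-D biosensor stream with simple statistics."""
    feats = []
    for start in range(0, len(stream) - win + 1, step):
        seg = stream[start:start + win]
        feats.append([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.asarray(feats)


def distance_outlier_scores(feats, k=5):
    """Score each window by its distance to the k-th nearest other window."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.sort(d, axis=1)[:, k - 1]


def detect_use_events(stream, threshold, win=60, step=10, k=5):
    """Indices of windows whose outlier score exceeds an empirically chosen threshold."""
    scores = distance_outlier_scores(window_features(stream, win, step), k)
    return np.flatnonzero(scores > threshold)
```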

  11. Systematic detection of seismic events at Mount St. Helens with an ultra-dense array

    Science.gov (United States)

    Meng, X.; Hartog, J. R.; Schmandt, B.; Hotovec-Ellis, A. J.; Hansen, S. M.; Vidale, J. E.; Vanderplas, J.

    2016-12-01

    During the summer of 2014, an ultra-dense array of 900 geophones was deployed around the crater of Mount St. Helens and continuously operated for 15 days. This dataset provides us an unprecedented opportunity to systematically detect seismic events around an active volcano and study their underlying mechanisms. We use a waveform-based matched filter technique to detect seismic events from this dataset. Due to the large volume of continuous data (∼1 TB), we performed the detection on the GPU cluster Stampede (https://www.tacc.utexas.edu/systems/stampede). We build a suite of template events from three catalogs: 1) the standard Pacific Northwest Seismic Network (PNSN) catalog (45 events); 2) the catalog from Hansen & Schmandt (2015) obtained with a reverse-time imaging method (212 events); and 3) the catalog identified with a matched filter technique using the PNSN permanent stations (190 events). By searching for template matches in the ultra-dense array, we find 2237 events. We then calibrate precise relative magnitudes for template and detected events, using a principal component fit to measure waveform amplitude ratios. The magnitude of completeness and b-value of the detected catalog are -0.5 and 1.1, respectively. Our detected catalog shows several intensive swarms, which are likely driven by fluid pressure transients in conduits or slip transients on faults underneath the volcano. We are currently relocating the detected catalog with HypoDD and measuring seismic velocity changes at Mount St. Helens using the coda wave interferometry of detected repeating earthquakes. The accurate temporal-spatial migration pattern of seismicity and seismic property changes should shed light on the physical processes beneath Mount St. Helens.

  12. Real-Time Gait Event Detection Based on Kinematic Data Coupled to a Biomechanical Model ?

    OpenAIRE

    Lambrecht, Stefan; Harutyunyan, Anna; Tanghe, Kevin; Afschrift, Maarten; De Schutter, Joris; Jonkers, Ilse

    2017-01-01

    Real-time detection of multiple stance events, more specifically initial contact (IC), foot flat (FF), heel off (HO), and toe off (TO), could greatly benefit neurorobotic (NR) and neuroprosthetic (NP) control. Three real-time threshold-based algorithms have been developed, detecting the aforementioned events based on kinematic data in combination with a biomechanical model. Data from seven subjects walking at three speeds on an instrumented treadmill were used to validate the presented algori...

  13. Building a Test Collection for Significant-Event Detection in Arabic Tweets

    OpenAIRE

    Almerekhi, Hind Ali

    2016-01-01

    With the increasing popularity of microblogging services like Twitter, researchers discovered a rich medium for tackling real-life problems like event detection. However, event detection in Twitter is often obstructed by the lack of public evaluation mechanisms such as test collections (sets of tweets, labels, and queries to measure the effectiveness of an information retrieval system). The problem is more evident when non-English languages, e.g., Arabic, are concerned. With t...

  14. The MediaMill TRECVID 2012 semantic video search engine

    NARCIS (Netherlands)

    Snoek, C.G.M.; van de Sande, K.E.A.; Habibian, A.; Kordumova, S.; Li, Z.; Mazloom, M.; Pintea, S.L.; Tao, R.; Koelma, D.C.; Smeulders, A.W.M.

    2012-01-01

    In this paper we describe our TRECVID 2012 video retrieval experiments. The MediaMill team participated in four tasks: semantic indexing, multimedia event detection, multimedia event recounting and instance search. The starting point for the MediaMill detection approach is our top-performing

  15. An integrated logit model for contamination event detection in water distribution systems.

    Science.gov (United States)

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented model for discrete choice prediction, estimated using the maximum likelihood method, for integrating the single alarms. The discrete choice model is jointly calibrated with other components of the event detection system framework on a training data set using genetic algorithms. The process of fusing the individual indicator probabilities, which is overlooked in many existing event detection system models, is confirmed to be a crucial part of the system and can be modelled with a discrete choice model to improve performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
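
    The core idea is to replace heuristic fusion of the per-indicator alarms with a logit (discrete choice) model fitted by maximum likelihood. A minimal sketch with scikit-learn's logistic regression is shown below; the indicator names and the random placeholder training data are illustrative, and the joint genetic-algorithm calibration used in the paper is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: one row per time step, one column per water quality indicator (e.g. chlorine,
#    pH, turbidity), each holding that indicator's single-alarm probability.
# y: 1 if a contamination event was present at that time step in the training set.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))                       # placeholder alarm probabilities
y_train = (X_train.mean(axis=1) > 0.7).astype(int)   # placeholder event labels

fusion = LogisticRegression().fit(X_train, y_train)  # maximum-likelihood logit fit


def fused_event_probability(indicator_alarms):
    """Integrate the single-indicator alarms into one event probability."""
    return fusion.predict_proba(np.atleast_2d(indicator_alarms))[:, 1]
```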

  16. Event-specific qualitative and quantitative detection of five genetically modified rice events using a single standard reference molecule.

    Science.gov (United States)

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Shin, Min-Ki; Moon, Gui-Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2017-07-01

    One novel standard reference plasmid, namely pUC-RICE5, was constructed as a positive control and calibrator for event-specific qualitative and quantitative detection of genetically modified (GM) rice (Bt63, Kemingdao1, Kefeng6, Kefeng8, and LLRice62). pUC-RICE5 contained fragments of a rice-specific endogenous reference gene (sucrose phosphate synthase) as well as the five GM rice events. An existing qualitative PCR assay approach was modified using pUC-RICE5 to create a quantitative method with limits of detection correlating to approximately 1-10 copies of rice haploid genomes. In this quantitative PCR assay, the square regression coefficients ranged from 0.993 to 1.000. The standard deviation and relative standard deviation values for repeatability ranged from 0.02 to 0.22 and 0.10% to 0.67%, respectively. The Ministry of Food and Drug Safety (Korea) validated the method and the results suggest it could be used routinely to identify five GM rice events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Potential of video cameras in assessing event and seasonal coastline behaviour: Grand Popo, Benin (Gulf of Guinea)

    NARCIS (Netherlands)

    Abessolo Ondoa, G.; Almar, R.; Kestenare, E.; Bahini, A.; Houngue, G-H.; Jouanno, J; Du Penhoat, Y.; Castelle, B.; Melet, A.; Meyssignac, B.; Anthony, E.J.; Laibi, R.; Alory, G.; Ranasinghe, Ranasinghe W M R J B

    2016-01-01

    In this study, we explore the potential of a nearshore video system to obtain a long-term estimation of coastal variables (shoreline, beach slope, sea level elevation and wave forcing) at Grand Popo beach, Benin, West Africa, from March 2013 to February 2015. We first present a validation of the

  18. Surgical tool detection in cataract surgery videos through multi-image fusion inside a convolutional neural network.

    Science.gov (United States)

    Al Hajj, Hassan; Lamard, Mathieu; Charriere, Katia; Cochener, Beatrice; Quellec, Gwenole

    2017-07-01

    The automatic detection of surgical tools in surgery videos is a promising solution for surgical workflow analysis. It paves the way to various applications, including surgical workflow optimization, surgical skill evaluation and real-time warning generation. A solution based on convolutional neural networks (CNNs) is proposed in this paper. Unlike existing solutions, the proposed CNN does not analyze images independently; it analyzes sequences of consecutive images. Features extracted from each image by the CNN are fused inside the network using the optical flow. For improved performance, this multi-image fusion strategy is also applied while training the CNN. The proposed framework was evaluated on a dataset of 30 cataract surgery videos (6 hours of video). Ten tool categories were defined by surgeons. The proposed system was able to detect each of these categories with a high area under the ROC curve (0.953 ≤ Az ≤ 0.987). The proposed detector, based on multi-image fusion, was significantly more sensitive and specific than a similar system analyzing images independently (p = 2.98 × 10⁻⁶ and p = 2.07 × 10⁻³, respectively).

  19. Real-time detection and classification of anomalous events in streaming data

    Science.gov (United States)

    Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.; Laska, Jason A.; Harrison, Lane T.

    2016-04-19

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.

  20. A robust neural network-based approach for microseismic event detection

    KAUST Repository

    Akram, Jubran

    2017-08-17

    We present an artificial neural network based approach for robust event detection from low S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied to the entire example dataset for event detection. The input features used include the average of absolute amplitudes, variance, energy ratio and polarization rectilinearity. These features are calculated in moving windows of the same length over the entire waveform. The output is set as a user-specified relative probability curve, which provides a robust way of distinguishing between weak and strong events. An optimal network is selected by studying the weight-based saliency and the effect of the number of neurons on the predicted results. Using synthetic data examples, we demonstrate that this approach is effective in detecting weaker events and reduces the number of false positives.
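
    A small feed-forward network over the four window-based attributes is straightforward to prototype. The sketch below assumes scikit-learn's MLPRegressor as the single-hidden-layer network and uses placeholder training data and a constant stand-in for rectilinearity (which properly requires three-component records); the saliency-based network selection from the paper is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor


def window_attributes(trace, win=100):
    """Per-window features: mean |amplitude|, variance, energy ratio, rectilinearity stand-in."""
    feats = []
    for start in range(0, len(trace) - 2 * win, win):
        pre, post = trace[start:start + win], trace[start + win:start + 2 * win]
        energy_ratio = (post ** 2).sum() / ((pre ** 2).sum() + 1e-12)
        feats.append([np.abs(post).mean(), post.var(), energy_ratio, 1.0])  # 1.0 = placeholder rectilinearity
    return np.asarray(feats)


# Targets are a user-specified relative probability curve (0 for noise windows, up to 1 for strong events).
rng = np.random.default_rng(0)
X_train = window_attributes(rng.standard_normal(20000))   # placeholder training waveform
y_train = np.zeros(len(X_train))                          # placeholder probability targets

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000).fit(X_train, y_train)
probability_curve = net.predict(window_attributes(rng.standard_normal(20000)))
```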

  1. An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data

    Science.gov (United States)

    Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos

    2015-01-01

    This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800

  2. Mining the IPTV Channel Change Event Stream to Discover Insight and Detect Ads

    Directory of Open Access Journals (Sweden)

    Matej Kren

    2016-01-01

    Full Text Available IPTV has been widely deployed throughout the world, bringing significant advantages to users in terms of the channel offering, video on demand, and interactive applications. One aspect that has often been neglected is the ability to collect precise and unobtrusive telemetry. TV set-top boxes that are deployed in modern IPTV systems can be thought of as capable sensor nodes that collect vast amounts of data, representing both the user activity and the quality of service delivered by the system itself. In this paper we focus on the user-generated events and analyze how the data stream of channel change events received from the entire IPTV network can be mined to obtain insight about the content. We demonstrate that it is possible to predict the occurrence of TV ads with high probability and show that the approach could be extended to model the user behavior and classify the viewership in multiple dimensions.

  3. Validity of the clinical and administrative databases in detecting post-operative adverse events.

    Science.gov (United States)

    Rodrigo-Rincon, Isabel; Martin-Vizcaino, Marta P; Tirapu-Leon, Belen; Zabalza-Lopez, Pedro; Abad-Vicente, Francisco J; Merino-Peralta, Asuncion

    2015-08-01

    Patient safety has become a major public health concern and a priority for multiple institutions. Assessment of adverse events is a key element for measuring the quality of healthcare organizations. The aim of this study was to measure the validity of the clinical and administrative database (CADB) as a source of information for the detection of post-operative adverse events. The study design was cross-sectional. The study was carried out at the Hospital de Navarra (north of Spain). The sample consisted of 1602 episodes of surgical hospitalization from nine surgical departments. Two sources of information were used: data extracted from the complete clinical record (CR), the gold standard, vs. the CADB. The rate of adverse events, sensitivity, positive predictive value and κ index were analysed for 28 types of post-operative adverse event. Each index was considered acceptable if it had a value >0.6. The rate of adverse events using the CADB was 12.5% vs. 24% using the CR within 30 days of surgery (P = 0.0001) and 13.9% using the CR during the hospital stay (P > 0.05). The overall sensitivity of the CADB in the detection of adverse events was 0.18, and the positive predictive value was 0.34. Two adverse events (accounting for 6% of the total events detected) had moderate validity and the rest poor validity. Forty-two per cent of the adverse events took place after patient discharge. Although the use of the CADB is appealing, the present study suggests that it is of very limited value in the detection of adverse events post-operatively. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  4. Using machine learning to detect events in eye-tracking data.

    Science.gov (United States)

    Zemblys, Raimondas; Niehorster, Diederick C; Komogortsev, Oleg; Holmqvist, Kenneth

    2017-02-23

    Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data without the need for a user to set parameters. In this study, we explore the application of random forest machine-learning technique for the detection of fixations, saccades, and post-saccadic oscillations (PSOs). In an effort to show practical utility of the proposed method to the applications that employ eye movement classification algorithms, we provide an example where the method is employed in an eye movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
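
    The key point is that any existing per-sample labels (manual or algorithmic) can train a generic classifier on sample-level gaze features, removing per-recording parameter tuning. The sketch below uses a random forest from scikit-learn; the two features (instantaneous speed and a local dispersion measure) and the placeholder training data are illustrative, not the feature set of the published method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def gaze_features(x, y, fs=1000.0, win=11):
    """Per-sample features from gaze coordinates: speed and local positional dispersion."""
    speed = np.hypot(np.gradient(x), np.gradient(y)) * fs
    half = win // 2
    disp = np.array([x[max(i - half, 0):i + half + 1].ptp() + y[max(i - half, 0):i + half + 1].ptp()
                     for i in range(len(x))])
    return np.column_stack([speed, disp])


# labels: per-sample event codes from existing coding, e.g. 0 = fixation, 1 = saccade, 2 = PSO.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(5000).cumsum(), rng.standard_normal(5000).cumsum()
labels = (np.abs(np.gradient(x)) > 1.0).astype(int)   # placeholder labels for demonstration only

clf = RandomForestClassifier(n_estimators=200).fit(gaze_features(x, y), labels)
predicted_events = clf.predict(gaze_features(x, y))   # apply to new recordings in practice
```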

  5. Detection of vulnerable relays and sensitive controllers under cascading events based on performance indices

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Hu, Yanting

    2014-01-01

    A performance-index-based detection strategy is proposed to identify the vulnerable relays and sensitive controllers under the overloading situation during cascading events. Based on the impedance margin sensitivity, diverse performance indices are proposed to help improve this detection. A study case of a voltage-instability-induced cascading blackout, built in a real-time digital simulator (RTDS), is used to demonstrate the proposed strategy. The simulation results indicate that this strategy can effectively detect the vulnerable relays and sensitive controllers under overloading situations.

  6. Detection of Double-Compressed H.264/AVC Video Incorporating the Features of the String of Data Bits and Skip Macroblocks

    Directory of Open Access Journals (Sweden)

    Heng Yao

    2017-12-01

    Full Text Available Today’s H.264/AVC coded videos offer high quality and a high data-compression ratio; they also have strong fault tolerance and good network adaptability, and have been widely used on the Internet. With the popularity of powerful and easy-to-use video editing software, digital videos can be tampered with in various ways. Therefore, detecting double compression in H.264/AVC video can be used as a first step in the study of video-tampering forensics. This paper proposes a simple, but effective, double-compression detection method that analyzes the periodic features of the string of data bits (SODBs) and the skip macroblocks (S-MBs) for all I-frames and P-frames in a double-compressed H.264/AVC video. For a given suspicious video, the SODBs and S-MBs are extracted for each frame. Both features are then incorporated to generate one enhanced feature to represent the periodic artifact of the double-compressed video. Finally, a time-domain analysis is conducted to detect the periodicity of the features. The primary Group of Pictures (GOP) size is estimated based on an exhaustive strategy. The experimental results demonstrate the efficacy of the proposed method.
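
    Once the per-frame SODB and skip-macroblock statistics have been merged into one enhanced feature sequence, double compression manifests as a periodic artifact whose period corresponds to the primary GOP size, which can be recovered by an exhaustive search over candidate periods. A minimal sketch of such a time-domain periodicity test is given below; the scoring function is an illustrative choice, and the H.264 bitstream parsing needed to extract the features is not shown.

```python
import numpy as np


def periodicity_strength(feature, period):
    """Largest phase-aligned mean of the (standardized) feature sampled every `period` frames."""
    return max(feature[offset::period].mean() for offset in range(period))


def estimate_primary_gop(feature, max_gop=60):
    """Exhaustively score candidate GOP sizes and return the most periodic one."""
    feature = (feature - feature.mean()) / (feature.std() + 1e-12)
    scores = {p: periodicity_strength(feature, p) for p in range(2, max_gop + 1)}
    best = max(scores, key=scores.get)
    return best, scores[best]


# feature: per-frame enhanced SODB/S-MB statistic extracted from the suspect video (placeholder here).
feature = np.random.default_rng(0).random(600)
estimated_gop, strength = estimate_primary_gop(feature)
```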

  7. RNAEditor: easy detection of RNA editing events and the introduction of editing islands.

    Science.gov (United States)

    John, David; Weirick, Tyler; Dimmeler, Stefanie; Uchida, Shizuka

    2017-11-01

    RNA editing of adenosine residues to inosine ('A-to-I editing') is the most common RNA modification event detectable with RNA sequencing (RNA-seq). While not directly detectable, inosine is read by next-generation sequencers as guanine. Therefore, mapping RNA-seq reads to their corresponding reference genome can detect potential editing events by identifying 'A-to-G' conversions. However, one must exercise caution when searching for editing sites, as A-to-G conversions also arise from sequencing errors as well as mutations. To address these complexities, several algorithms and software products have been developed to accurately identify editing events. Here, we survey currently available methods to analyze RNA editing events and introduce a new easy-to-use bioinformatics tool 'RNAEditor' for the detection of RNA editing events. During the development of RNAEditor, we noticed editing often happened in clusters, which we named 'editing islands'. We developed a clustering algorithm to find editing islands and included it in RNAEditor. RNAEditor is freely available at http://rnaeditor.uni-frankfurt.de. We anticipate that RNAEditor will provide biologists with an easy-to-use tool for studying RNA editing events and the newly defined editing islands. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Contamination Event Detection with Multivariate Time-Series Data in Agricultural Water Monitoring

    Directory of Open Access Journals (Sweden)

    Yingchi Mao

    2017-12-01

    Full Text Available Time series data of multiple water quality parameters are obtained from the water sensor networks deployed in the agricultural water supply network. The accurate and efficient detection and warning of contamination events to prevent pollution from spreading is one of the most important issues when pollution occurs. In order to comprehensively reduce the event detection deviation, a spatial-temporal-based event detection approach with multivariate time-series data for water quality monitoring (M-STED) was proposed. The M-STED approach includes three parts. In the first part, M-STED adopts a Rule K algorithm to select backbone nodes as the nodes in the connected dominating set (CDS), which forward the sensed data of the multiple water parameters. The second part determines the state of each backbone node with back propagation neural network models and sequential Bayesian analysis in the current timestamp. The third part establishes a spatial model with Bayesian networks to estimate the state of the backbones in the next timestamp and traces the “outlier” node to its neighborhoods to detect a contamination event. The experimental results indicate that the average detection rate is more than 80% with M-STED and the false detection rate is lower than 9%. The M-STED approach can improve the rate of detection by about 40% and reduce the false alarm rate by about 45%, compared with S-STED, an event detection algorithm using a single water parameter. Moreover, the proposed M-STED can exhibit better performance in terms of detection delay and scalability.

  9. Assessing the probability of detection of horizontal gene transfer events in bacterial populations

    Directory of Open Access Journals (Sweden)

    Jeffrey P. Townsend

    2012-02-01

    Full Text Available Experimental approaches to identify horizontal gene transfer (HGT events of non-mobile DNA in bacteria have typically relied on detection of the initial transformants or their immediate offspring. However, rare HGT events occurring in large and structured populations are unlikely to be detected in a short time frame. Population genetic modelling of the growth dynamics of bacterial genotypes is therefore necessary to account for natural selection and genetic drift during the time lag and to predict realistic time frames for detection with a given sampling design. Here we draw on statistical approaches to population genetic theory to construct a cohesive probabilistic framework for investigation of HGT of exogenous DNA into bacteria. In particular, the stochastic timing of rare HGT events is accounted for. Integrating over all possible event timings, we provide an equation for the probability of detection, given that HGT actually occurred. Furthermore, we identify the key variables determining the probability of detecting HGT events in four different case scenarios that are representative of bacterial populations in various environments. Our theoretical analysis provides insight into the temporal aspects of dissemination of genetic material, such as antibiotic resistance genes or transgenes present in GMOs. Due to the long time scales involved and the exponential growth of bacteria with differing fitness, quantitative analyses incorporating bacterial generation time and levels of selection, such as the one presented here, will be a necessary component of any future experimental design and analysis of HGT as it occurs in natural settings.

  10. A novel method to precisely detect apnea and hypopnea events by airflow and oximetry signals.

    Science.gov (United States)

    Huang, Wu; Guo, Bing; Shen, Yan; Tang, Xiangdong

    2017-09-01

    Sleep apnea hypopnea syndrome (SAHS) affects people's quality of life. The apnea hypopnea index (AHI) is the key indicator for diagnosing SAHS, and its determination depends on accurate detection of apnea and hypopnea events. This paper provides a novel method to detect apnea and hypopnea events based on the respiratory nasal airflow signal and the oximetry signal. The method uses sliding window and short time slice methods to eliminate systematic and sporadic noise in the airflow signal, improving the detection precision. Using this algorithm, the sleep data of 30 subjects from the Huaxi Sleep Center of Sichuan University (HSCSU) and the Teaching Hospital of Chengdu University of Traditional Chinese Medicine (THCUTCM) were auto-analyzed to detect apnea and hypopnea events. The total number of predicted apnea and hypopnea events was 8470. By manual investigation, the sensitivity and positive predictive value (PPV) of detecting apnea and hypopnea events were 97.6% and 95.7%, respectively. The sleep data of 28 subjects from HSCSU were then used to auto-diagnose SAHS according to the AHI; the sensitivity and PPV were both 92.3%. This is an effective and precise method to diagnose SAHS and is suitable for home-care SAHS screening devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. On Event/Time Triggered and Distributed Analysis of a WSN System for Event Detection, Using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Sofia Maria Dima

    2016-01-01

    Full Text Available Event detection in realistic WSN environments is a critical research domain, and environmental monitoring comprises one of its most pronounced applications. Although efforts related to environmental applications have been presented in the current literature, there is a significant lack of investigation into the performance of such systems when applied in wireless environments. Aiming to address this shortage, in this paper an advanced multimodal approach based on fuzzy logic is followed. The proposed fuzzy inference system (FIS) is implemented on TelosB motes and evaluates the probability of fire detection while aiming towards power conservation. In addition to a straightforward centralized approach, a distributed implementation of the above FIS is also proposed, aiming at network congestion reduction while optimally distributing the energy consumption among network nodes so as to maximize network lifetime. Moreover, this work proposes an event-based execution of the aforementioned FIS, aiming to further reduce the computational as well as the communication cost compared to a periodic, time-triggered FIS execution. As a final contribution, performance metrics acquired from all the proposed FIS implementation techniques are thoroughly compared and analyzed with respect to critical network conditions, aiming to offer a realistic evaluation and thus the extraction of objective conclusions.

  12. Infrasound Analysis: Reduction of Missed Events and Detection of Simultaneous Signals.

    Science.gov (United States)

    Averbuch, G.; Assink, J. D.; Smets, P. S. M.; Evers, L. G.

    2016-12-01

    Automatic detection of infrasound signals, e.g. using microbarometer arrays of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty (CTBT), requires low rates of both false alarms and missed events. In this presentation, we focus on the detection of simultaneous, low signal-to-noise ratio (SNR) infrasound signals. Simultaneous signals may mask each other and may cause low SNR values. We introduce a new method based on the Fisher detector and the Hough transform which makes it possible to reduce the number of missed events by 1) detecting low SNR signals and 2) detecting simultaneous signals. The new method is applied to multiple years of infrasound data from I18DK (Greenland) and shows an increase of 20% in the number of detections.

  13. Multiplexed polarization OTDR system with high DOP and ability of multi-event detection.

    Science.gov (United States)

    Wang, Xuefeng; Wang, Chaodong; Tang, Ming; Fu, Songnian; Shum, Perry

    2017-05-01

    A novel polarization optical time domain reflectometry (POTDR) system with a high degree of polarization is proposed for multi-event detection. By employing multiple 2×2 optical fiber couplers and fiber mirrors, an arbitrary number and customized length of sensing fiber can be multiplexed into the system without modification of the other components, e.g., the light source, photodetector, signal processing device, etc. More importantly, the signal-to-noise ratio of this system is significantly improved, and the temporal depolarization effect can be almost completely suppressed. Additionally, the system response time is considerably reduced by dispensing with data averaging, so that intrusion events such as touching and moving the fiber can be detected instantaneously and precisely located. Experiments have been conducted that proved the capability of simultaneous multi-event detection and vibration frequency measurement. This system promises application potential in multi-zone perimeter security and physical field measurement.

  14. [The Questionnaire of Experiences Associated with Video games (CERV): an instrument to detect the problematic use of video games in Spanish adolescents].

    Science.gov (United States)

    Chamarro, Andres; Carbonell, Xavier; Manresa, Josep Maria; Munoz-Miralles, Raquel; Ortega-Gonzalez, Raquel; Lopez-Morron, M Rosa; Batalla-Martinez, Carme; Toran-Monserrat, Pere

    2014-01-01

    The aim of this study is to validate the Video Game-Related Experiences Questionnaire (CERV in Spanish). The questionnaire consists of 17 items, developed from the CERI (Internet-Related Experiences Questionnaire - Beranuy et al.), and assesses the problematic use of non-massive video games. It was validated for adolescents in Compulsory Secondary Education. To validate the questionnaire, a confirmatory factor analysis (CFA) and an internal consistency analysis were carried out. The factor structure shows two factors: (a) psychological dependence and use for evasion; and (b) negative consequences of using video games. Two cut-off points were established to distinguish people with no problems in their use of video games (NP), with potential problems in their use of video games (PP), and with serious problems in their use of video games (SP). Results show that there is higher prevalence among males and that problematic use decreases with age. The CERV seems to be a good instrument for the screening of adolescents with difficulties deriving from video game use. Further research should relate problematic video game use with difficulties in other life domains, such as the academic field.

  15. Acoustic Neuroma Educational Video

    Medline Plus


  16. Acoustic Neuroma Educational Video

    Medline Plus


  17. Discrete Event Simulation Model of the Polaris 2.1 Gamma Ray Imaging Radiation Detection Device

    Science.gov (United States)

    2016-06-01

    [Extraction-damaged report excerpt] Discrete Event Simulation Model of the Polaris 2.1 Gamma Ray Imaging Radiation Detection Device, by Andres T. Juarez III (approved for public release; distribution is unlimited). The recoverable text notes that the Polaris continues to make improvements not just in GPS and WiFi capabilities but in capture rates for different radiation …

  18. Events

    Directory of Open Access Journals (Sweden)

    Igor V. Karyakin

    2016-02-01

    Full Text Available The 9th ARRCN Symposium 2015 was held during 21st–25th October 2015 at the Novotel Hotel, Chumphon, Thailand, one of the most favored travel destinations in Asia. The 10th ARRCN Symposium 2017 will be held during October 2017 in Davao, Philippines. The International Symposium on the Montagu's Harrier (Circus pygargus), «The Montagu's Harrier in Europe. Status. Threats. Protection», organized by the environmental organization «Landesbund für Vogelschutz in Bayern e.V.» (LBV), was held on November 20-22, 2015 in Germany. The location of this event was the city of Würzburg in Bavaria.

  19. Secure Access Control and Large Scale Robust Representation for Online Multimedia Event Detection

    Science.gov (United States)

    Liu, Changyu; Li, Huiling

    2014-01-01

    We developed an online multimedia event detection (MED) system. However, there are a secure access control issue and a large scale robust representation issue when we want to integrate traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag of words tiling approach was then adopted to encode these feature vectors for bridging the gap between the objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches. PMID:25147840

  20. Real-Time Event Detection for Monitoring Natural and Source Waterways - Sacramento, CA

    Science.gov (United States)

    The use of event detection systems in finished drinking water systems is increasing in order to monitor water quality in both operational and security contexts. Recent incidents involving harmful algal blooms and chemical spills into watersheds have increased interest in monitori...

  1. Use of wireless sensor networks for distributed event detection in disaster management applications

    NARCIS (Netherlands)

    Bahrepour, M.; Meratnia, Nirvana; Poel, Mannes; Taghikhaki, Zahra; Havinga, Paul J.M.

    Recently, wireless sensor networks (WSNs) have become mature enough to go beyond being simple fine-grained continuous monitoring platforms and have become one of the enabling technologies for early-warning disaster systems. Event detection functionality of WSNs can be of great help and importance

  2. Detecting erosion events in earth dam and levee passive seismic data with clustering

    NARCIS (Netherlands)

    Belcher, W.; Camp, T.; Krzhizhanovskaya, V.V.

    2015-01-01

    Geophysical sensor technologies can be used to understand the structural integrity of Earth Dams and Levees (EDLs). We are part of an interdisciplinary team researching techniques for the advancement of EDL health monitoring and the automatic detection of internal erosion events. We present results

  3. Real-Time Gait Event Detection Based on Kinematic Data Coupled to a Biomechanical Model.

    Science.gov (United States)

    Lambrecht, Stefan; Harutyunyan, Anna; Tanghe, Kevin; Afschrift, Maarten; De Schutter, Joris; Jonkers, Ilse

    2017-03-24

    Real-time detection of multiple stance events, more specifically initial contact (IC), foot flat (FF), heel off (HO), and toe off (TO), could greatly benefit neurorobotic (NR) and neuroprosthetic (NP) control. Three real-time threshold-based algorithms have been developed, detecting the aforementioned events based on kinematic data in combination with a biomechanical model. Data from seven subjects walking at three speeds on an instrumented treadmill were used to validate the presented algorithms, accumulating to a total of 558 steps. The reference for the gait events was obtained using marker and force plate data. All algorithms had excellent precision and no false positives were observed. Timing delays of the presented algorithms were similar to current state-of-the-art algorithms for the detection of IC and TO, whereas smaller delays were achieved for the detection of FF. Our results indicate that, based on their high precision and low delays, these algorithms can be used for the control of an NR/NP, with the exception of the HO event. Kinematic data is used in most NR/NP control schemes and is thus available at no additional cost, resulting in a minimal computational burden. The presented methods can also be applied for screening pathological gait or gait analysis in general in/outside of the laboratory.
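
    Threshold-based detection of this kind typically monitors a model-derived kinematic signal per event and fires on a calibrated threshold crossing. The sketch below shows the generic threshold-crossing machinery for initial contact and toe off using vertical heel and toe trajectories; the choice of signals, the floor-level threshold, and the omission of the biomechanical-model coupling are simplifying assumptions, not the validated algorithms themselves.

```python
import numpy as np


def threshold_crossings(signal, threshold, rising=True):
    """Sample indices where `signal` crosses `threshold` on a rising or falling edge."""
    above = signal > threshold
    edges = np.diff(above.astype(int))
    return np.flatnonzero((edges == 1) if rising else (edges == -1)) + 1


def detect_ic_to(heel_z, toe_z, floor_level=0.02):
    """Initial contact when the heel drops to floor level; toe off when the toe leaves it."""
    initial_contact = threshold_crossings(heel_z, floor_level, rising=False)
    toe_off = threshold_crossings(toe_z, floor_level, rising=True)
    return initial_contact, toe_off
```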

  4. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze content of massive amount of public space video data and has been one of the most active areas of computer vision research in the last two decades. Current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interests to a particular product; how long did they stay in the sho...

  5. Detection and identification of multiple genetically modified events using DNA insert fingerprinting.

    Science.gov (United States)

    Raymond, Philippe; Gendron, Louis; Khalf, Moustafa; Paul, Sylvianne; Dibley, Kim L; Bhat, Somanath; Xie, Vicki R D; Partis, Lina; Moreau, Marie-Eve; Dollard, Cheryl; Coté, Marie-José; Laberge, Serge; Emslie, Kerry R

    2010-03-01

    Current screening and event-specific polymerase chain reaction (PCR) assays for the detection and identification of genetically modified organisms (GMOs) in samples of unknown composition or for the detection of non-regulated GMOs have limitations, and alternative approaches are required. A transgenic DNA fingerprinting methodology using restriction enzyme digestion, adaptor ligation, and nested PCR was developed where individual GMOs are distinguished by the characteristic fingerprint pattern of the fragments generated. The inter-laboratory reproducibility of the amplified fragment sizes using different capillary electrophoresis platforms was compared, and reproducible patterns were obtained with an average difference in fragment size of 2.4 bp. DNA insert fingerprints for 12 different maize events, including two maize hybrids and one soy event, were generated that reflected the composition of the transgenic DNA constructs. Once produced, the fingerprint profiles were added to a database which can be readily exchanged and shared between laboratories. This approach should facilitate the process of GMO identification and characterization.

  6. A novel seizure detection algorithm informed by hidden Markov model event states

    Science.gov (United States)

    Baldassano, Steven; Wulsin, Drausin; Ung, Hoameng; Blevins, Tyler; Brown, Mesha-Gay; Fox, Emily; Litt, Brian

    2016-06-01

    Objective. Recently the FDA approved the first responsive, closed-loop intracranial device to treat epilepsy. Because these devices must respond within seconds of seizure onset and not miss events, they are tuned to have high sensitivity, leading to frequent false positive stimulations and decreased battery life. In this work, we propose a more robust seizure detection model. Approach. We use a Bayesian nonparametric Markov switching process to parse intracranial EEG (iEEG) data into distinct dynamic event states. Each event state is then modeled as a multidimensional Gaussian distribution to allow for predictive state assignment. By detecting event states highly specific for seizure onset zones, the method can identify precise regions of iEEG data associated with the transition to seizure activity, reducing false positive detections associated with interictal bursts. The seizure detection algorithm was translated to a real-time application and validated in a small pilot study using 391 days of continuous iEEG data from two dogs with naturally occurring, multifocal epilepsy. A feature-based seizure detector modeled after the NeuroPace RNS System was developed as a control. Main results. Our novel seizure detection method demonstrated an improvement in false negative rate (0/55 seizures missed versus 2/55 seizures missed) as well as a significantly reduced false positive rate (0.0012 h-1 versus 0.058 h-1). All seizures were detected an average of 12.1 ± 6.9 s before the unequivocal epileptic onset (UEO). Significance. This algorithm represents a computationally inexpensive, individualized, real-time detection method suitable for implantable antiepileptic devices that may considerably reduce the false positive rate relative to current industry standards.
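
    As a rough illustration of the predictive state-assignment step described above, each learned event state can be treated as a multivariate Gaussian and a new iEEG feature window assigned to the most likely state. The sketch below uses hypothetical two-dimensional features and made-up state parameters; it is not the authors' model, which additionally learns the states nonparametrically from data.

        import numpy as np
        from scipy.stats import multivariate_normal

        # Hypothetical, pre-learned event states: each state is a Gaussian over a
        # short window of iEEG features (e.g., band powers); values are illustrative.
        states = {
            "interictal": multivariate_normal(mean=[1.0, 0.5], cov=[[0.2, 0.0], [0.0, 0.1]]),
            "onset_like": multivariate_normal(mean=[3.5, 2.0], cov=[[0.5, 0.1], [0.1, 0.3]]),
        }

        def assign_state(feature_vector):
            # Assign a feature window to the state with the highest log-likelihood.
            scores = {name: dist.logpdf(feature_vector) for name, dist in states.items()}
            return max(scores, key=scores.get)

        print(assign_state(np.array([3.2, 1.8])))   # -> "onset_like"

    Detections would then be restricted to states that are highly specific for the seizure onset zone, which is how the record reduces false positives from interictal bursts.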

  7. Event detection and exception handling strategies in the ASDEX Upgrade discharge control system

    Energy Technology Data Exchange (ETDEWEB)

    Treutterer, W., E-mail: Wolfgang.Treutterer@ipp.mpg.de; Neu, G.; Rapson, C.; Raupp, G.; Zasche, D.; Zehetbauer, T.

    2013-10-15

    Highlights: •Event detection and exception handling is integrated in control system architecture. •Pulse control with local exception handling and pulse supervision with central exception handling are strictly separated. •Local exception handling limits the effect of an exception to a minimal part of the controlled system. •Central Exception Handling solves problems requiring coordinated action of multiple control components. -- Abstract: Thermonuclear plasmas are governed by nonlinear characteristics: plasma operation can be classified into scenarios with pronounced features like L and H-mode, ELMs or MHD activity. Transitions between them may be treated as events. Similarly, technical systems are also subject to events such as failure of measurement sensors, actuator saturation or violation of machine and plant operation limits. Such situations often are handled with a mixture of pulse abortion and iteratively improved pulse schedule reference programming. In case of protection-relevant events, however, the complexity of even a medium-sized device as ASDEX Upgrade requires a sophisticated and coordinated shutdown procedure rather than a simple stop of the pulse. The detection of events and their intelligent handling by the control system has been shown to be valuable also in terms of saving experiment time and cost. This paper outlines how ASDEX Upgrade's discharge control system (DCS) detects events and handles exceptions in two stages: locally and centrally. The goal of local exception handling is to limit the effect of an unexpected or asynchronous event to a minimal part of the controlled system. Thus, local exception handling facilitates robustness to failures but keeps the decision structures lean. A central state machine deals with exceptions requiring coordinated action of multiple control components. DCS implements the state machine by means of pulse schedule segments containing pre-programmed waveforms to define discharge goal and control

  8. Video mining using combinations of unsupervised and supervised learning techniques

    Science.gov (United States)

    Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou

    2003-12-01

    We discuss the meaning and significance of the video mining problem, and present our work on some aspects of video mining. A simple definition of video mining is unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in stage 1. We discuss the target applications and find that purely unsupervised approaches are too computationally complex to be implemented on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns that are useful to the end-user of the application. We target consumer video browsing applications such as commercial message detection, sports highlights extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual event discovery enables accurate supervised detection of desired events. Our techniques are computationally simple and robust to common variations in production styles, etc.

  9. Enhancing adverse drug event detection in electronic health records using molecular structure similarity: application to pancreatitis.

    Directory of Open Access Journals (Sweden)

    Santiago Vilar

    Full Text Available Adverse drug events (ADEs) detection and assessment is at the center of pharmacovigilance. Data mining of systems such as FDA's Adverse Event Reporting System (AERS) and, more recently, Electronic Health Records (EHRs) can aid in the automatic detection and analysis of ADEs. Although different data mining approaches have been shown to be valuable, it is still crucial to improve the quality of the generated signals. The objective was to leverage structural similarity by developing molecular fingerprint-based models (MFBMs) to strengthen ADE signals generated from EHR data. A reference standard of drugs known to be causally associated with the adverse event pancreatitis was used to create an MFBM. Electronic Health Records (EHRs) from the New York Presbyterian Hospital were mined to generate structured data. Disproportionality Analysis (DPA) was applied to the data, and 278 possible signals related to the ADE pancreatitis were detected. Candidate drugs associated with these signals were then assessed using the MFBM to find the most promising candidates based on structural similarity. The use of the MFBM as a means to strengthen or prioritize signals generated from the EHR significantly improved the detection accuracy of ADEs related to pancreatitis. The MFBM also highlights the etiology of the ADE by identifying structurally similar drugs, which could follow a similar mechanism of action. The method proposed in this paper provides evidence of being a promising adjunct to existing automated ADE detection and analysis approaches.

  10. Automatic detection of confusion in elderly users of a web-based health instruction video

    NARCIS (Netherlands)

    Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek

    BACKGROUND: Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare

  11. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    Science.gov (United States)

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as the minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices.
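
    The event rules quoted above are concrete enough to sketch offline: IC at the minimum of the shank flexion/extension angle, EC and MS at the minimum and maximum of the angular velocity within a cycle. The snippet below is a simplified, non-real-time illustration on a synthetic signal; the published algorithm additionally performs calibration and step-by-step adaptation, which are omitted here.

        import numpy as np

        fs = 100.0                                        # assumed sampling rate (Hz)
        t = np.arange(0.0, 1.0, 1.0 / fs)                 # one synthetic gait cycle
        angle = 10.0 * np.cos(2 * np.pi * t)              # shank flexion/extension angle (deg)
        angular_velocity = np.gradient(angle, 1.0 / fs)   # angular velocity (deg/s)

        def detect_events_in_cycle(angle, angular_velocity):
            # Offline version of the event rules described in the record.
            ic = int(np.argmin(angle))                    # Initial Contact: angle minimum
            ec = int(np.argmin(angular_velocity))         # End Contact: angular velocity minimum
            ms = int(np.argmax(angular_velocity))         # Mid-Swing: angular velocity maximum
            return {"IC": ic, "EC": ec, "MS": ms}

        events = detect_events_in_cycle(angle, angular_velocity)
        print({name: index / fs for name, index in events.items()})   # event times (s)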

  12. Personalized Behavior Pattern Recognition and Unusual Event Detection for Mobile Users

    Directory of Open Access Journals (Sweden)

    Junho Ahn

    2013-01-01

    Full Text Available Mobile phones have become widely used for obtaining help in emergencies, such as accidents, crimes, or health emergencies. The smartphone is an essential device that can record emergency situations, which can be used for clues or evidence, or as an alert system in such situations. In this paper, we focus on mobile-based identification of potentially unusual or abnormal events occurring in a mobile user's daily behavior patterns. For purposes of this research, we have classified events as “unusual” for a mobile user when an event is an infrequently occurring one relative to the user's normal behavior patterns, all of which are collected and recorded on the user's mobile phone. We build a general unusual event classification model to be automated on the smartphone for use by any mobile phone user. To classify both normal and unusual events, we analyzed the activity, location, and audio sensor data collected from 20 mobile phone users to identify these users' personalized normal daily behavior patterns and any unusual events occurring in their daily activity. We used binary fusion classification algorithms on the subjects' recorded experimental data and ultimately identified the most accurately performing fusion algorithm for unusual event detection.

  13. Automatic detection of adverse events to predict drug label changes using text and data mining techniques.

    Science.gov (United States)

    Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki

    2013-11-01

    The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Cards and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. In total, 76% of drug label changes were automatically predicted. Of these, 6% were detected only by text mining. JSRE enabled precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well-placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.

  14. Visual and Real-Time Event-Specific Loop-Mediated Isothermal Amplification Based Detection Assays for Bt Cotton Events MON531 and MON15985.

    Science.gov (United States)

    Randhawa, Gurinder Jit; Chhabra, Rashmi; Bhoge, Rajesh K; Singh, Monika

    2015-01-01

    Bt cotton events MON531 and MON15985 are authorized for commercial cultivation in more than 18 countries. In India, four Bt cotton events have been commercialized; more than 95% of total area under genetically modified (GM) cotton cultivation comprises events MON531 and MON15985. The present study reports on the development of efficient event-specific visual and real-time loop-mediated isothermal amplification (LAMP) assays for detection and identification of cotton events MON531 and MON15985. Efficiency of LAMP assays was compared with conventional and real-time PCR assays. Real-time LAMP assay was found time-efficient and most sensitive, detecting up to two target copies within 35 min. The developed real-time LAMP assays, when combined with efficient DNA extraction kit/protocol, may facilitate onsite GM detection to check authenticity of Bt cotton seeds.

  15. Detection of patient movement during CBCT examination using video observation compared with an accelerometer-gyroscope tracking system.

    Science.gov (United States)

    Spin-Neto, Rubens; Matzen, Louise H; Schropp, Lars; Gotfredsen, Erik; Wenzel, Ann

    2017-02-01

    To compare video observation (VO) with a novel three-dimensional registration method, based on an accelerometer-gyroscope (AG) system, to detect patient movement during CBCT examination. The movements were further analyzed according to complexity and patient age. In 181 patients (118 females/63 males; age average 30 years, range: 9-84 years), 206 CBCT examinations were performed, which were video-recorded during examination. An AG was, at the same time, attached to the patient head to track head position in three dimensions. Three observers scored patient movement (yes/no) by VO. AG provided movement data on the x-, y- and z-axes. Thresholds for AG-based registration were defined at 0.5, 1, 2, 3 and 4 mm (movement distance). Movement detected by VO was compared with that registered by AG, according to movement complexity (uniplanar vs multiplanar, as defined by AG) and patient age (≤15, 16-30 and ≥31 years). According to AG, movement ≥0.5 mm was present in 160 (77.7%) examinations. According to VO, movement was present in 46 (22.3%) examinations. One VO-detected movement was not registered by AG. Overall, VO did not detect 71.9% of the movements registered by AG at the 0.5-mm threshold. At a movement distance ≥4 mm, 20% of the AG-registered movements were not detected by VO. Multiplanar movements such as lateral head rotation (72.1%) and nodding/swallowing (52.6%) were more often detected by VO in comparison with uniplanar movements, such as head lifting (33.6%) and anteroposterior translation (35.6%), at the 0.5-mm threshold. The prevalence of patients who move was highest in patients younger than 16 years (64.3% for VO and 92.3% for AG-based registration at the 0.5-mm threshold). AG-based movement registration resulted in a higher prevalence of patient movement during CBCT examination than VO-based registration. Also, AG-registered multiplanar movements were more frequently detected by VO than uniplanar movements. The prevalence of patients who move

  16. Signal Detection of Adverse Drug Reaction of Amoxicillin Using the Korea Adverse Event Reporting System Database.

    Science.gov (United States)

    Soukavong, Mick; Kim, Jungmee; Park, Kyounghoon; Yang, Bo Ram; Lee, Joongyub; Jin, Xue Mei; Park, Byung Joo

    2016-09-01

    We conducted pharmacovigilance data mining for a β-lactam antibiotic, amoxicillin, and compared the adverse events (AEs) with the drug labels of nine countries: Korea, the USA, the UK, Japan, Germany, Switzerland, Italy, France, and Laos. We used the Korea Adverse Event Reporting System (KAERS) database, a nationwide database of AE reports, between December 1988 and June 2014. Frequentist and Bayesian methods were used to calculate the disproportionality distribution of drug-AE pairs. An AE detected by all three indices, the proportional reporting ratio (PRR), the reporting odds ratio (ROR), and the information component (IC), was defined as a signal. The KAERS database contained a total of 807,582 AE reports, among which 1,722 reports were attributed to amoxicillin. Among the 192,510 antibiotics-AE pairs, the number of amoxicillin-AE pairs was 2,913. Among 241 AEs, 52 adverse events were detected as amoxicillin signals. Comparing the drug labels of the nine countries, 12 adverse events, including ineffective medicine, bronchitis, rhinitis, sinusitis, dry mouth, gastroesophageal reflux, hypercholesterolemia, gastric carcinoma, abnormal crying, induration, pulmonary carcinoma, and influenza-like symptoms, were not listed on any of the labels. In conclusion, we detected 12 new signals of amoxicillin that were not listed on the labels of the nine countries. These signals should be followed by signal evaluation, including causal association, clinical significance, and preventability.
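
    All three indices used above are computed from the 2x2 contingency table of report counts for a (drug, AE) pair. The sketch below uses illustrative counts (not the KAERS numbers) and the textbook definitions; the information component is shown in its simple observed-versus-expected form, without the Bayesian shrinkage usually applied in practice.

        import math

        # Illustrative report counts (not from KAERS):
        # a: target drug with target AE, b: target drug with other AEs,
        # c: other drugs with target AE, d: other drugs with other AEs.
        a, b, c, d = 40, 1682, 900, 190000

        prr = (a / (a + b)) / (c / (c + d))      # proportional reporting ratio
        ror = (a * d) / (b * c)                  # reporting odds ratio
        n = a + b + c + d
        expected = (a + b) * (a + c) / n         # expected count under independence
        ic = math.log2(a / expected)             # information component (unshrunk)

        print(f"PRR={prr:.2f}  ROR={ror:.2f}  IC={ic:.2f}")

    A drug-AE pair is flagged as a signal only when all three indices exceed their conventional thresholds, matching the definition used in the record.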

  17. Methods for the Detection of Adenosine-to-Inosine Editing Events in Cellular RNA.

    Science.gov (United States)

    Oakes, Eimile; Vadlamani, Pranathi; Hundley, Heather A

    2017-01-01

    Modification of RNA is essential for properly expressing the repertoire of RNA transcripts necessary for both cell type and developmental specific functions. RNA modifications serve to dynamically re-wire and fine-tune the genetic information carried by an invariable genome. One important type of RNA modification is RNA editing and the most common and well-studied type of RNA editing is the hydrolytic deamination of adenosine to inosine. Inosine is a biological mimic of guanosine; therefore, when RNA is reverse transcribed, inosine is recognized as guanosine by the reverse transcriptase and a cytidine is incorporated into the complementary DNA (cDNA) strand. During PCR amplification, guanosines pair with the newly incorporated cytidines. As a result, the adenosine-to-inosine (A-to-I) editing events are recognized as adenosine to guanosine changes when comparing the sequences of the genomic DNA to the cDNA. This chapter describes the methods for extracting endogenous RNA for subsequent analyses of A-to-I RNA editing using reverse transcriptase-based approaches. We discuss techniques for the detection of A-to-I RNA editing events in messenger RNA (mRNA), including analyzing editing levels at specific adenosines within the total pool of mRNA versus analyzing editing patterns that occur in individual transcripts and a method for detecting editing events across the entire transcriptome. The detection of RNA editing events and editing levels can be used to better understand normal biological processes and disease states.

  18. Semi-parametric Robust Event Detection for Massive Time-Domain Databases

    CERN Document Server

    Blocker, Alexander W

    2013-01-01

    The detection and analysis of events within massive collections of time-series has become an extremely important task for time-domain astronomy. In particular, many scientific investigations (e.g. the analysis of microlensing and other transients) begin with the detection of isolated events in irregularly-sampled series with both non-linear trends and non-Gaussian noise. We outline a semi-parametric, robust, parallel method for identifying variability and isolated events at multiple scales in the presence of the above complications. This approach harnesses the power of Bayesian modeling while maintaining much of the speed and scalability of more ad-hoc machine learning approaches. We also contrast this work with event detection methods from other fields, highlighting the unique challenges posed by astronomical surveys. Finally, we present results from the application of this method to 87.2 million EROS-2 sources, where we have obtained a greater than 100-fold reduction in candidates for certain types of pheno...

  19. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    Science.gov (United States)

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a reduction of 54.8% in the images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
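
    The background-subtraction idea underlying the program can be sketched directly on grayscale image arrays. The thresholds and confidence categories below are placeholders for illustration, not the values or rules used by the published program.

        import numpy as np

        def fraction_changed(image, background, diff_threshold=25):
            # Fraction of pixels that differ noticeably from the background frame.
            diff = np.abs(image.astype(np.int16) - background.astype(np.int16))
            return float(np.mean(diff > diff_threshold))

        def categorize(images, background, low=0.01, high=0.05):
            # Rough confidence category for a sequence containing a crossing event.
            changed = max(fraction_changed(img, background) for img in images)
            if changed >= high:
                return "likely event"
            if changed >= low:
                return "possible event"
            return "probably empty"

        rng = np.random.default_rng(0)
        background = rng.integers(0, 50, size=(120, 160), dtype=np.uint8)
        frame = background.copy()
        frame[40:80, 60:120] += 100                 # a bright "animal" enters the scene
        print(categorize([frame], background))      # -> "likely event"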

  20. Detection, tracking and event localization of jet stream features in 4-D atmospheric data

    Directory of Open Access Journals (Sweden)

    S. Limbach

    2012-04-01

    Full Text Available We introduce a novel algorithm for the efficient detection and tracking of features in spatiotemporal atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. The algorithm works on data given on a four-dimensional structured grid. Feature selection and clustering are based on adjustable local and global criteria; feature tracking is predominantly based on spatial overlaps of the features' full volumes. The resulting 3-D features and the identified correspondences between features of consecutive time steps are represented as the nodes and edges of a directed acyclic graph, the event graph. Merging and splitting events appear in the event graph as nodes with multiple incoming or outgoing edges, respectively. The precise localization of the splitting events is based on a search for all grid points inside the initial 3-D feature that have a similar distance to two successive 3-D features of the next time step. The merging event is localized analogously, operating backward in time. As a first application of our method we present a climatology of upper-tropospheric jet streams and their events, based on four-dimensional wind speed data from European Centre for Medium-Range Weather Forecasts (ECMWF) analyses. We compare our results with a climatology from a previous study, investigate the statistical distribution of the merging and splitting events, and illustrate the meteorological significance of the jet splitting events with a case study. A brief outlook is given on additional potential applications of the 4-D data segmentation technique.
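
    The tracking and event-graph bookkeeping can be illustrated with features reduced to sets of occupied grid cells (a stand-in for the full 3-D volumes used in the paper); splitting then shows up as a node with several outgoing edges, and merging as a node with several incoming edges.

        # Toy features per time step, each given as a set of occupied grid cells.
        features_t0 = {"A": {1, 2, 3, 4}, "B": {10, 11, 12}}
        features_t1 = {"C": {3, 4, 5}, "D": {10, 11}, "E": {12, 13}}

        def correspondence_edges(prev, curr, min_overlap=1):
            # Edges of the event graph: features of consecutive steps that overlap.
            return [(p, c) for p, pc in prev.items() for c, cc in curr.items()
                    if len(pc & cc) >= min_overlap]

        edges = correspondence_edges(features_t0, features_t1)
        print(edges)                                 # [('A', 'C'), ('B', 'D'), ('B', 'E')]

        # A node with more than one outgoing edge marks a splitting event (here: B);
        # multiple incoming edges would mark a merging event.
        out_degree = {}
        for src, _ in edges:
            out_degree[src] = out_degree.get(src, 0) + 1
        print([name for name, deg in out_degree.items() if deg > 1])   # ['B']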

  1. Semi-automated detection of fractional shortening in zebrafish embryo heart videos

    Directory of Open Access Journals (Sweden)

    Nasrat Sara

    2016-09-01

    Full Text Available Quantifying cardiac functions in model organisms like embryonic zebrafish is of high importance in small molecule screens for new therapeutic compounds. One relevant cardiac parameter is the fractional shortening (FS). A method for semi-automatic quantification of FS in video recordings of zebrafish embryo hearts is presented. The software provides automated visual information about the end-systolic and end-diastolic stages of the heart by displaying corresponding colored lines in a Motion-mode display. After manually marking the ventricle diameters in frames of the end-systolic and end-diastolic stages, the FS is calculated. The software was evaluated by comparing the results of the determination of FS with results obtained from another established method. Correlations of 0.96 < r < 0.99 between the two methods were found, indicating that the new software provides comparable results for the determination of the FS.
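
    Once the end-diastolic and end-systolic ventricle diameters have been marked, FS follows from its standard definition, FS = (EDD - ESD) / EDD. A short illustration with hypothetical diameters:

        def fractional_shortening(edd, esd):
            # FS = (end-diastolic diameter - end-systolic diameter) / end-diastolic diameter
            return 100.0 * (edd - esd) / edd

        # Hypothetical diameters (in micrometers) marked on the Motion-mode display.
        print(f"FS = {fractional_shortening(95.0, 60.0):.1f}%")   # FS = 36.8%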

  2. Microfluidic Arrayed Lab-On-A-Chip for Electrochemical Capacitive Detection of DNA Hybridization Events.

    Science.gov (United States)

    Ben-Yoav, Hadar; Dykstra, Peter H; Bentley, William E; Ghodssi, Reza

    2017-01-01

    A microfluidic electrochemical lab-on-a-chip (LOC) device for DNA hybridization detection has been developed. The device comprises a 3 × 3 array of microelectrodes integrated with a dual layer microfluidic valved manipulation system that provides controlled and automated capabilities for high throughput analysis of microliter volume samples. The surface of the microelectrodes is functionalized with single-stranded DNA (ssDNA) probes which enable specific detection of complementary ssDNA targets. These targets are detected by a capacitive technique which measures dielectric variation at the microelectrode-electrolyte interface due to DNA hybridization events. A quantitative analysis of the hybridization events is carried out based on a sensing modeling that includes detailed analysis of energy storage and dissipation components. By calculating these components during hybridization events the device is able to demonstrate specific and dose response sensing characteristics. The developed microfluidic LOC for DNA hybridization detection offers a technology for real-time and label-free assessment of genetic markers outside of laboratory settings, such as at the point-of-care or in-field environmental monitoring.

  3. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    Science.gov (United States)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and the events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing for classifying the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using an SVM include its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquake, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support
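
    A minimal sketch of the supervised step described above, using scikit-learn's SVC to separate earthquakes from quarry blasts; the two-dimensional feature vectors are synthetic placeholders standing in for whatever waveform attributes are actually extracted from the IMS recordings.

        import numpy as np
        from sklearn.svm import SVC

        # Synthetic training data: one feature vector per event,
        # labels 0 = earthquake, 1 = quarry blast (for illustration only).
        rng = np.random.default_rng(42)
        earthquakes = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
        blasts = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(50, 2))
        X = np.vstack([earthquakes, blasts])
        y = np.array([0] * 50 + [1] * 50)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")    # standard RBF-kernel SVM
        clf.fit(X, y)

        print(clf.predict([[0.2, -0.1], [2.1, 1.4]]))    # -> [0 1]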

  4. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data managements, scientists and software engineers in video processing and computer vision, coaches and instructors that use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of video stream by extracting of highlights, events, and meaningf...

  5. Optimizing a neural network for detection of moving vehicles in video

    NARCIS (Netherlands)

    Fischer, N.M.; Kruithof, M.C.; Bouma, H.

    2017-01-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing,

  6. The MediaMill TRECVID 2011 semantic video search engine

    NARCIS (Netherlands)

    Snoek, C.G.M.; van de Sande, K.E.A.; Li, X.; Mazloom, M.; Jiang, Y.; Koelma, D.C.; Smeulders, A.W.M.

    2011-01-01

    In this paper we describe our TRECVID 2011 video retrieval experiments. The MediaMill team participated in two tasks: semantic indexing and multimedia event detection. The starting point for the MediaMill detection approach is our top-performing bag-of-words system of TRECVID 2010, which uses

  7. Automated Video Detection of Epileptic Convulsion Slowing as a Precursor for Post-Seizure Neuronal Collapse

    NARCIS (Netherlands)

    Kalitzin, S.N.; Bauer, P.R.; Lamberts, R.J.; Velis, D.N.; Thijs, R.D.; Lopes Da Silva, F.H.

    2016-01-01

    Automated monitoring and alerting for adverse events in people with epilepsy can provide higher security and quality of life for those who suffer from this debilitating condition. Recently, we found a relation between clonic slowing at the end of a convulsive seizure (CS) and the occurrence and

  8. AGILE Detection of a Candidate Gamma-Ray Precursor to the ICECUBE-160731 Neutrino Event

    Science.gov (United States)

    Lucarelli, F.; Pittori, C.; Verrecchia, F.; Donnarumma, I.; Tavani, M.; Bulgarelli, A.; Giuliani, A.; Antonelli, L. A.; Caraveo, P.; Cattaneo, P. W.; Colafrancesco, S.; Longo, F.; Mereghetti, S.; Morselli, A.; Pacciani, L.; Piano, G.; Pellizzoni, A.; Pilia, M.; Rappoldi, A.; Trois, A.; Vercellone, S.

    2017-09-01

    On 2016 July 31 the ICECUBE collaboration reported the detection of a high-energy starting event induced by an astrophysical neutrino. Here, we report on a search for a gamma-ray counterpart to the ICECUBE-160731 event, made with the AGILE satellite. No detection was found spanning the time interval of ±1 ks around the neutrino event time T0 using the AGILE “burst search” system. Looking for a possible gamma-ray precursor in the results of the AGILE-GRID automatic Quick Look procedure over predefined 48-hr time bins, we found an excess above 100 MeV between 1 and 2 days before T0, which is positionally consistent with the ICECUBE error circle and has a post-trial significance of about 4σ. A refined data analysis of this excess confirms, a posteriori, the automatic detection. The new AGILE transient source, named AGL J1418+0008, thus stands as a possible ICECUBE-160731 gamma-ray precursor. No other space mission or ground observatory has reported any detection of transient emission consistent with the ICECUBE event. We show that Fermi-LAT had a low exposure for the ICECUBE region during the AGILE gamma-ray transient. Based on an extensive search for cataloged sources within the error regions of ICECUBE-160731 and AGL J1418+0008, we find a possible common counterpart showing some of the key features associated with the high-energy peaked BL Lac (HBL) class of blazars. Further investigations on the nature of this source using dedicated SWIFT ToO data are presented.

  9. Composite Event Specification and Detection for Supporting Active Capability in an OODBMS: Semantics Architecture and Implementation.

    Science.gov (United States)

    1995-03-01

    The available record text is fragmentary; it discusses the semantics of the 'AND' operator in composite event specification, noting that its automaton-based definition is unclear, and cites related work, including N. H. Gehani, H. V. Jagadish, and O. Shmueli, COMPOSE: A System For Composite Event Specification and Detection, and papers from the 17th International Conference on Very Large Data Bases, Barcelona (Catalonia, Spain), 1991.

  10. Accelerometer Detects Pump Thrombosis and Thromboembolic Events in an In vitro HVAD Circuit.

    Science.gov (United States)

    Schalit, Itai; Espinoza, Andreas; Pettersen, Fred-Johan; Thiara, Amrit P S; Karlsen, Hilde; Sørensen, Gro; Fosse, Erik; Fiane, Arnt E; Halvorsen, Per S

    2017-10-27

    Pump thrombosis and stroke are serious complications of left ventricular assist device (LVAD) support. The aim of this study was to test the ability of an accelerometer to detect pump thrombosis and thromboembolic events (TEs) using real-time analysis of pump vibrations. An accelerometer sensor was attached to a HeartWare HVAD and tested in three in vitro experiments using different pumps for each experiment. Each experiment included thrombi injections sized 0.2-1.0 mL and control interventions: pump speed change, afterload increase, preload decrease, and saline bolus injections. A spectrogram was calculated from the accelerometer signal, and the third harmonic amplitude was used to test the sensitivity and specificity of the method. The third harmonic amplitude was compared with the pump energy consumption. The acceleration signals were of high quality. A significant change was identified in the accelerometer third harmonic during the thromboembolic interventions. The third harmonic detected thromboembolic events with higher sensitivity/specificity than LVAD energy consumption: 92%/94% vs. 72%/58%, respectively. A total of 60% of thromboembolic events led to a prolonged third harmonic amplitude change, which is indicative of thrombus mass residue on the impeller. We concluded that there is strong evidence to support the feasibility of real-time continuous LVAD monitoring for thromboembolic events and pump thrombosis using an accelerometer. Further in vivo studies are needed to confirm these promising findings.
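
    The detection statistic in this record is the amplitude of the third harmonic of the pump rotation frequency, taken from a spectrogram of the accelerometer signal. The sketch below reproduces that idea on a synthetic vibration signal; the pump speed, sampling rate and alarm rule are assumptions for illustration, not the study's settings.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 2000.0                     # assumed accelerometer sampling rate (Hz)
        pump_hz = 45.0                  # assumed impeller rotation frequency (2700 rpm)
        t = np.arange(0.0, 10.0, 1.0 / fs)

        # Synthetic vibration: fundamental plus a third harmonic whose amplitude
        # grows halfway through, mimicking thrombus mass landing on the impeller.
        third_amp = np.where(t < 5.0, 0.1, 0.5)
        accel = np.sin(2 * np.pi * pump_hz * t) + third_amp * np.sin(2 * np.pi * 3 * pump_hz * t)

        f, times, Sxx = spectrogram(accel, fs=fs, nperseg=1024)
        third_bin = np.argmin(np.abs(f - 3 * pump_hz))       # bin closest to the 3rd harmonic
        third_harmonic = np.sqrt(Sxx[third_bin, :])          # amplitude-like trace over time

        baseline = third_harmonic[: len(times) // 4]
        threshold = 3.0 * np.median(baseline)                # assumed alarm rule
        alarms = times[third_harmonic > threshold]
        print(f"first alarm at t = {alarms[0]:.1f} s" if alarms.size else "no alarm")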

  11. Automated Feature and Event Detection with SDO AIA and HMI Data

    Science.gov (United States)

    Davey, Alisdair; Martens, P. C. H.; Attrill, G. D. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Su, Y.; Testa, P.; Wills-Davey, M.; Savcheva, A.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F..; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgouli, M. K.; McAteer, R. T. J.; Hurlburt, N.; Timmons, R.

    The Solar Dynamics Observatory (SDO) represents a new frontier in quantity and quality of solar data. At about 1.5 TB/day, the data will not be easily digestible by solar physicists using the same methods that have been employed for images from previous missions. In order for solar scientists to use the SDO data effectively they need meta-data that will allow them to identify and retrieve data sets that address their particular science questions. We are building a comprehensive computer vision pipeline for SDO, abstracting complete metadata on many of the features and events detectable on the Sun without human intervention. Our project unites more than a dozen individual, existing codes into a systematic tool that can be used by the entire solar community. The feature finding codes will run as part of the SDO Event Detection System (EDS) at the Joint Science Operations Center (JSOC; joint between Stanford and LMSAL). The metadata produced will be stored in the Heliophysics Event Knowledgebase (HEK), which will be accessible on-line for the rest of the world directly or via the Virtual Solar Observatory (VSO) . Solar scientists will be able to use the HEK to select event and feature data to download for science studies.

  12. Signal Detection of Imipenem Compared to Other Drugs from Korea Adverse Event Reporting System Database.

    Science.gov (United States)

    Park, Kyounghoon; Soukavong, Mick; Kim, Jungmee; Kwon, Kyoung Eun; Jin, Xue Mei; Lee, Joongyub; Yang, Bo Ram; Park, Byung Joo

    2017-05-01

    To detect signals of adverse drug events after imipenem treatment using the Korea Institute of Drug Safety & Risk Management-Korea adverse event reporting system database (KIDS-KD). We performed data mining using KIDS-KD, which was constructed using spontaneously reported adverse event (AE) reports between December 1988 and June 2014. We detected signals calculated the proportional reporting ratio, reporting odds ratio, and information component of imipenem. We defined a signal as any AE that satisfied all three indices. The signals were compared with drug labels of nine countries. There were 807582 spontaneous AEs reports in the KIDS-KD. Among those, the number of antibiotics related AEs was 192510; 3382 reports were associated with imipenem. The most common imipenem-associated AE was the drug eruption; 353 times. We calculated the signal by comparing with all other antibiotics and drugs; 58 and 53 signals satisfied the three methods. We compared the drug labelling information of nine countries, including the USA, the UK, Japan, Italy, Switzerland, Germany, France, Canada, and South Korea, and discovered that the following signals were currently not included in drug labels: hypokalemia, cardiac arrest, cardiac failure, Parkinson's syndrome, myocardial infarction, and prostate enlargement. Hypokalemia was an additional signal compared with all other antibiotics, and the other signals were not different compared with all other antibiotics and all other drugs. We detected new signals that were not listed on the drug labels of nine countries. However, further pharmacoepidemiologic research is needed to evaluate the causality of these signals.

  13. Adverse event detection (AED) system for continuously monitoring and evaluating structural health status

    Science.gov (United States)

    Yun, Jinsik; Ha, Dong Sam; Inman, Daniel J.; Owen, Robert B.

    2011-03-01

    Structural damage for spacecraft is mainly due to impacts such as collision of meteorites or space debris. We present a structural health monitoring (SHM) system for space applications, named Adverse Event Detection (AED), which integrates an acoustic sensor, an impedance-based SHM system, and a Lamb wave SHM system. With these three health-monitoring methods in place, we can determine the presence, location, and severity of damage. An acoustic sensor continuously monitors acoustic events, while the impedance-based and Lamb wave SHM systems are in sleep mode. If an acoustic sensor detects an impact, it activates the impedance-based SHM. The impedance-based system determines if the impact incurred damage. When damage is detected, it activates the Lamb wave SHM system to determine the severity and location of the damage. Further, since an acoustic sensor dissipates much less power than the two SHM systems and the two systems are activated only when there is an acoustic event, our system reduces overall power dissipation significantly. Our prototype system demonstrates the feasibility of the proposed concept.
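
    The power-saving idea is a tiered wake-up chain: the acoustic sensor runs continuously, the impedance check runs only after an acoustic event, and the Lamb wave scan runs only once damage is confirmed. The control flow can be sketched schematically, with the three checks stubbed out as placeholder functions.

        import random

        def acoustic_event_detected():
            # Stub for the continuously running, low-power acoustic trigger.
            return random.random() < 0.1

        def impedance_indicates_damage():
            # Stub for the impedance-based SHM check, woken only on an acoustic event.
            return random.random() < 0.5

        def lamb_wave_localize():
            # Stub for the Lamb wave scan returning damage location and severity.
            return "panel 3, severity: moderate"

        def monitoring_cycle():
            # One pass of the tiered scheme described in the record.
            if not acoustic_event_detected():
                return "sleep"                       # both SHM subsystems stay asleep
            if not impedance_indicates_damage():
                return "impact without damage"
            return "damage detected (" + lamb_wave_localize() + ")"

        random.seed(1)
        print([monitoring_cycle() for _ in range(5)])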

  14. Towards real-time change detection in videos based on existing 3D models

    Science.gov (United States)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3d objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3d information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detection of changes in the 3d structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3d change detection can be performed against an existing 3d model. Our approach is capable of change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3d model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
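
    The core comparison is per pixel, between the image-based depth map and the depth map rendered from the existing 3d model at the same pose. A minimal sketch with synthetic depth maps and an assumed depth tolerance:

        import numpy as np

        def detect_changes(estimated_depth, rendered_depth, tolerance=0.5):
            # Binary change mask: pixels where the observed depth deviates from the
            # model by more than an assumed tolerance (same unit as the depth maps);
            # non-finite estimates are ignored.
            valid = np.isfinite(estimated_depth) & np.isfinite(rendered_depth)
            change = np.zeros(rendered_depth.shape, dtype=bool)
            change[valid] = np.abs(estimated_depth[valid] - rendered_depth[valid]) > tolerance
            return change

        # Synthetic example: a facade at 10 m in the model, with a new structure
        # two meters closer appearing in the observed depth map.
        rendered = np.full((100, 100), 10.0)
        observed = rendered.copy()
        observed[30:60, 30:60] = 8.0
        print(detect_changes(observed, rendered).sum(), "changed pixels")   # 900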

  15. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real... Topics include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  16. Presentation of the results of a Bayesian automatic event detection and localization program to human analysts

    Science.gov (United States)

    Kushida, N.; Kebede, F.; Feitio, P.; Le Bras, R.

    2016-12-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing and testing NET-VISA (Arora et al., 2013), a Bayesian automatic event detection and localization program, and evaluating its performance in a realistic operational mode. In our preliminary testing at the CTBTO, NET-VISA shows better performance than its currently operating automatic localization program. However, given CTBTO's role and its international context, a new technology should be introduced cautiously when it replaces a key piece of the automatic processing. We integrated the results of NET-VISA into the Analyst Review Station, which is extensively used by the analysts, so that they can check the accuracy and robustness of the Bayesian approach. We expect the workload of the analysts to be reduced because of the better performance of NET-VISA in finding missed events and obtaining a more complete set of stations than the current system, which has been operating for nearly twenty years. The results of a series of tests indicate that the expectations borne out by the automatic tests, which show an overall overlap improvement of 11%, meaning that the missed-event rate is cut by 42%, hold for the integrated interactive module as well. Analysts also find new events that qualify for the CTBTO Reviewed Event Bulletin, beyond the ones analyzed through the standard procedures. Arora, N., Russell, S., and Sudderth, E., NET-VISA: Network Processing Vertically Integrated Seismic Analysis, 2013, Bull. Seismol. Soc. Am., 103, 709-729.

  17. Detecting paralinguistic events in audio stream using context in features and probabilistic decisions.

    Science.gov (United States)

    Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok; Narayanan, Shrikanth

    2016-03-01

    Non-verbal communication involves encoding, transmission and decoding of non-lexical cues and is realized using vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues perform the function of maintaining conversational flow, expressing emotions, and marking personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughters, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughters are associated with affective expressions while fillers (e.g. um, ah) are used to hold the floor during a conversation. In this paper we present an automatic non-verbal vocal event detection system focusing on the detection of laughter and fillers. We extend our system presented during the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristics curve of 95.3% for detecting laughters and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to features carrying higher discriminability, with implications towards a better system design.
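
    Context enters the system at two levels, as described above: stacking neighboring frames onto the raw frame-wise features, and smoothing the frame-wise output probabilities before thresholding. The sketch below illustrates both steps generically; it is independent of the specific classifier and heuristic rules used in the paper.

        import numpy as np

        def stack_context(features, width=2):
            # Append the features of `width` neighboring frames on each side.
            n_frames, _ = features.shape
            padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
            return np.hstack([padded[i:i + n_frames] for i in range(2 * width + 1)])

        def smooth_decisions(probabilities, window=5, threshold=0.5):
            # Median-filter the frame-wise event probabilities, then threshold.
            padded = np.pad(probabilities, window // 2, mode="edge")
            smoothed = np.array([np.median(padded[i:i + window])
                                 for i in range(len(probabilities))])
            return smoothed > threshold

        frames = np.arange(24.0).reshape(8, 3)          # 8 frames, 3 features each
        print(stack_context(frames).shape)              # (8, 15): 5 frames x 3 features

        probs = np.array([0.1, 0.2, 0.9, 0.2, 0.8, 0.9, 0.85, 0.1])
        print(smooth_decisions(probs).astype(int))      # isolated spike suppressed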

  18. Detecting paralinguistic events in audio stream using context in features and probabilistic decisions☆

    Science.gov (United States)

    Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok; Narayanan, Shrikanth

    2017-01-01

    Non-verbal communication involves encoding, transmission and decoding of non-lexical cues and is realized using vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues perform the function of maintaining conversational flow, expressing emotions, and marking personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughters, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughters are associated with affective expressions while fillers (e.g. um, ah) are used to hold the floor during a conversation. In this paper we present an automatic non-verbal vocal event detection system focusing on the detection of laughter and fillers. We extend our system presented during the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristics curve of 95.3% for detecting laughters and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to features carrying higher discriminability, with implications towards a better system design. PMID:28713197

  19. Digital disease detection: A systematic review of event-based internet biosurveillance systems.

    Science.gov (United States)

    O'Shea, Jesse

    2017-05-01

    Internet access and usage have changed how people seek and report health information. Meanwhile, infectious diseases continue to threaten humanity. The analysis of Big Data, or vast digital data, presents an opportunity to improve disease surveillance and epidemic intelligence. Epidemic intelligence contains two components: indicator-based and event-based. A relatively new surveillance type has emerged called event-based Internet biosurveillance systems. These systems use information on events impacting health from Internet sources, such as social media or news aggregates. These systems circumvent the limitations of traditional reporting systems by being inexpensive, transparent, and flexible. Yet, innovations and the functionality of these systems can change rapidly. The aim of this review is to update the current state of knowledge on event-based Internet biosurveillance systems by identifying all systems, including current functionality, with hopes to aid decision makers with whether to incorporate new methods into comprehensive programmes of surveillance. A systematic review was performed through the PubMed, Scopus, and Google Scholar databases, while also including grey literature and other publication types. In total, 50 event-based Internet systems were identified, including an extraction of 15 attributes for each system, described in 99 articles. Each system uses different innovative technology and data sources to gather, process, and disseminate data to detect infectious disease outbreaks. The review emphasises the importance of using both formal and informal sources for timely and accurate infectious disease outbreak surveillance, cataloguing all event-based Internet biosurveillance systems. By doing so, future researchers will be able to use this review as a library for referencing systems, with hopes of learning, building, and expanding Internet-based surveillance systems. Event-based Internet biosurveillance should act as an extension of traditional systems, to be utilised as an

  1. Final Scientific Report, Integrated Seismic Event Detection and Location by Advanced Array Processing

    Energy Technology Data Exchange (ETDEWEB)

    Kvaerna, T.; Gibbons. S.J.; Ringdal, F; Harris, D.B.

    2007-01-30

    In the field of nuclear explosion monitoring, it has become a priority to detect, locate, and identify seismic events down to increasingly small magnitudes. The consideration of smaller seismic events has implications for a reliable monitoring regime. Firstly, the number of events to be considered increases greatly; an exponential increase in naturally occurring seismicity is compounded by large numbers of seismic signals generated by human activity. Secondly, the signals from smaller events become more difficult to detect above the background noise and estimates of parameters required for locating the events may be subject to greater errors. Thirdly, events are likely to be observed by a far smaller number of seismic stations, and the reliability of event detection and location using a very limited set of observations needs to be quantified. For many key seismic stations, detection lists may be dominated by signals from routine industrial explosions which should be ascribed, automatically and with a high level of confidence, to known sources. This means that expensive analyst time is not spent locating routine events from repeating seismic sources and that events from unknown sources, which could be of concern in an explosion monitoring context, are more easily identified and can be examined with due care. We have obtained extensive lists of confirmed seismic events from mining and other artificial sources which have provided an excellent opportunity to assess the quality of existing fully-automatic event bulletins and to guide the development of new techniques for online seismic processing. Comparing the times and locations of confirmed events from sources in Fennoscandia and NW Russia with the corresponding time and location estimates reported in existing automatic bulletins has revealed substantial mislocation errors which preclude a confident association of detected signals with known industrial sources. The causes of the errors are well understood and are

  2. Detection of short-term slow slip events along the Nankai Trough via groundwater observations

    Science.gov (United States)

    Kitagawa, Yuichi; Koizumi, Naoji

    2013-12-01

    In order to develop new tools or techniques to detect short-term slow slip events (S-SSEs) along subduction zones, we attempted to detect S-SSEs by conducting groundwater pressure observations. At ANO station, a groundwater observation station operated for earthquake prediction research by the Geological Survey of Japan, the National Institute of Advanced Industrial Science and Technology, groundwater pressures changed due to six S-SSEs that occurred near ANO from June 2011 to April 2013. The fault models of these S-SSEs, which were estimated mainly by observing the crustal strains and tilts, explained the changes in the groundwater pressures. If the strain sensitivity of the observed groundwater pressure or level is larger than 1 mm/nstrain and the noise level is smaller than 50 mm/day, it is possible to detect S-SSEs that occur in southwest Japan by conducting groundwater pressure or level observations.

  3. Multiscale vision model for event detection and reconstruction in two-photon imaging data

    DEFF Research Database (Denmark)

    Brazhe, Alexey; Mathiesen, Claus; Lind, Barbara Lykke

    2014-01-01

    Reliable detection of calcium waves in multiphoton imaging data is challenging because of the low signal-to-noise ratio and because of the unpredictability of the time and location of these spontaneous events. This paper describes our approach to calcium wave detection and reconstruction based on a modified multiscale vision model, an object detection framework based on the thresholding of wavelet coefficients and hierarchical trees of significant coefficients followed by nonlinear iterative partial object reconstruction, for the analysis of two-photon calcium imaging data. The framework is discussed ... of the multiscale vision model is similar in the denoising, but provides a better segmentation of the image into meaningful objects, whereas other methods need to be combined with dedicated thresholding and segmentation utilities.

  4. High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.

    Science.gov (United States)

    Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue

    2010-11-13

    Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when drugs are approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce, was introduced by Google to support large-scale, data-intensive applications. In this study, we proposed a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and tested it by mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data mining tasks, using this pharmacovigilance case as one specific example. The results demonstrated that the MapReduce programming model could improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
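
    As an illustration of the signal statistic involved, the sketch below computes the PRR for one drug-event pair from a list of (drug, event) report pairs. It is a plain serial Python version, not the authors' MapReduce implementation; in the MapReduce setting the contingency counts a, b, c, d would be accumulated by mappers and reducers rather than in a single loop, and the field names and sample data here are invented.

        # Hedged sketch: Proportional Reporting Ratio (PRR) from spontaneous reports.
        # PRR = [a / (a + b)] / [c / (c + d)], where
        #   a = reports with the drug and the event, b = reports with the drug only,
        #   c = reports with the event but not the drug, d = reports with neither.
        def prr(reports, drug, event):
            a = b = c = d = 0
            for rep_drug, rep_event in reports:
                if rep_drug == drug and rep_event == event:
                    a += 1
                elif rep_drug == drug:
                    b += 1
                elif rep_event == event:
                    c += 1
                else:
                    d += 1
            return (a / (a + b)) / (c / (c + d))

        # Toy data: one (drug, reported event) entry per report.
        reports = [("drugX", "nausea"), ("drugX", "rash"), ("drugX", "nausea"),
                   ("drugY", "nausea"), ("drugY", "headache"), ("drugY", "rash")]
        print(prr(reports, "drugX", "nausea"))   # 2.0: nausea reported twice as often with drugX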

  5. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    Science.gov (United States)

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm (GA) and Fuzzy Logic. The membership functions of the fuzzy system are calculated using the Genetic Algorithm, taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of the shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared with the latest techniques and yields better results in terms of the F1-score parameter.

  6. DETECT: A MATLAB Toolbox for Event Detection and Identification in Time Series, with Applications to Artifact Detection in EEG Signals

    Science.gov (United States)

    2013-04-24

    ... newborn infants [5], as well as the monitoring of fatigue in prolonged driving simulations [6]. In many of these settings, the experiments may last ... and duration could be used to monitor subject performance during the task, as these features have been linked to drowsiness and fatigue [3]. Deviations ...

  7. Event Detection Using Mobile Phone Mass GPS Data and Their Reliability Verification by DMSP/OLS Night Light Image

    Science.gov (United States)

    Yuki, Akiyama; Satoshi, Ueyama; Ryosuke, Shibasaki; Adachi, Ryuichiro

    2016-06-01

    In this study, we developed a method to detect sudden population concentration on a certain day and area, that is, an "Event," all over Japan in 2012 using mass GPS data provided by mobile phone users. First, the stay locations of all phone users were detected using existing methods. Second, the areas and days where Events occurred were detected by aggregating the mass of stay locations into 1-km-square grid polygons. Finally, the proposed method could detect Events with an especially large number of visitors in the year by removing the influence of Events that occurred continuously throughout the year. In addition, we demonstrated the reasonable reliability of the proposed Event detection method by comparing the results of Event detection with light intensities obtained from DMSP/OLS night light images. Our method can detect not only positive events such as festivals but also negative events such as natural disasters and road accidents. These results are expected to support policy development for urban planning, disaster prevention, and transportation management.

  8. [Detection of adverse events in hospitalized adult patients by using the Global Trigger Tool method].

    Science.gov (United States)

    Guzmán-Ruiz, O; Ruiz-López, P; Gómez-Cámara, A; Ramírez-Martín, M

    2015-01-01

    To identify and characterize adverse events (AE) in an Internal Medicine Department of a district hospital using an extension of the Global Trigger Tool (GTT), analyzing the diagnostic validity of the tool. An observational, analytical, descriptive and retrospective study was conducted on clinical charts from 2013 in an Internal Medicine Department in order to detect AE through the identification of 'triggers' (an event often related to an AE). The 'triggers' and AE were located by systematic review of clinical documentation. The AE were characterized after they were identified. A total of 149 AE were detected in 291 clinical charts during 2013, of which 75.3% were detected directly by the tool, while the rest were not associated with a trigger. The percentage of charts that had at least one AE was 35.4%. The most frequent AE found was pressure ulcer (12%), followed by delirium, constipation, nosocomial respiratory infection and altered level of consciousness by drugs. Almost half (47.6%) of the AE were related to drug use, and 32.2% of all AE were considered preventable. The tool demonstrated a sensitivity of 91.3% (95%CI: 88.9-93.2) and a specificity of 32.5% (95%CI: 29.9-35.1). It had a positive predictive value of 42.5% (95%CI: 40.1-45.1) and a negative predictive value of 87.1% (95%CI: 83.8-89.9). The tool used in this study is valid, useful and reproducible for the detection of AE. It also serves to determine rates of injury and to observe their progression over time. A high frequency of both AE and preventable events was observed in this study. Copyright © 2014 SECA. Published by Elsevier Espana. All rights reserved.

  9. Automatic lameness detection based on consecutive 3D-video recordings

    NARCIS (Netherlands)

    Hertem, van T.; Viazzi, S.; Steensels, M.; Maltz, E.; Antler, A.; Alchanatis, V.; Schlageter-Tello, A.; Lokhorst, C.; Romanini, C.E.B.; Bahr, C.; Berckmans, D.; Halachmi, I.

    2014-01-01

    Manual locomotion scoring for lameness detection is a time-consuming and subjective procedure. Therefore, the objective of this study is to optimise the classification output of a computer vision based algorithm for automated lameness scoring. Cow gait recordings were made during four consecutive

  10. Detection of distorted frames in retinal video-sequences via machine learning

    Science.gov (United States)

    Kolar, Radim; Liberdova, Ivana; Odstrcilik, Jan; Hracho, Michal; Tornow, Ralf P.

    2017-07-01

    This paper describes the detection of distorted frames in retinal sequences based on a set of global features extracted from each frame. The feature vector is subsequently used in a classification step, in which three types of classifiers are tested. The best classification accuracy of 96% was achieved with a support vector machine approach.
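
    A minimal sketch of such a classification step, assuming scikit-learn and purely synthetic global features (the study's actual features and classifier settings are not reproduced here):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Hypothetical global features per frame: mean intensity, contrast, sharpness.
        sharp_frames = rng.normal([120.0, 40.0, 0.8], 5.0, size=(100, 3))
        distorted_frames = rng.normal([110.0, 25.0, 0.3], 5.0, size=(100, 3))
        X = np.vstack([sharp_frames, distorted_frames])
        y = np.array([0] * 100 + [1] * 100)        # 0 = usable frame, 1 = distorted frame

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)    # support vector machine classifier
        print("held-out accuracy:", clf.score(X_te, y_te))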

  11. Clinical outcome of subchromosomal events detected by whole‐genome noninvasive prenatal testing

    Science.gov (United States)

    Helgeson, J.; Wardrop, J.; Boomer, T.; Almasri, E.; Paxton, W. B.; Saldivar, J. S.; Dharajiya, N.; Monroe, T. J.; Farkas, D. H.; Grosu, D. S.

    2015-01-01

    Abstract Objective A novel algorithm to identify fetal microdeletion events in maternal plasma has been developed and used in clinical laboratory‐based noninvasive prenatal testing. We used this approach to identify the subchromosomal events 5pdel, 22q11del, 15qdel, 1p36del, 4pdel, 11qdel, and 8qdel in routine testing. We describe the clinical outcomes of those samples identified with these subchromosomal events. Methods Blood samples from high‐risk pregnant women submitted for noninvasive prenatal testing were analyzed using low coverage whole genome massively parallel sequencing. Sequencing data were analyzed using a novel algorithm to detect trisomies and microdeletions. Results In testing 175 393 samples, 55 subchromosomal deletions were reported. The overall positive predictive value for each subchromosomal aberration ranged from 60% to 100% for cases with diagnostic and clinical follow‐up information. The total false positive rate was 0.0017% for confirmed false positives results; false negative rate and sensitivity were not conclusively determined. Conclusion Noninvasive testing can be expanded into the detection of subchromosomal copy number variations, while maintaining overall high test specificity. In the current setting, our results demonstrate high positive predictive values for testing of rare subchromosomal deletions. © 2015 The Authors. Prenatal Diagnosis published by John Wiley & Sons Ltd. PMID:26088833

  12. Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures.

    Science.gov (United States)

    Guan, Jungang; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Mattausch, Hans Jürgen

    2017-01-30

    The Hough Transform (HT) is a method for extracting straight lines from an edge image. The main limitations of the HT for usage in actual applications are computation time and storage requirements. This paper reports a hardware architecture for HT implementation on a Field Programmable Gate Array (FPGA) with parallelized voting procedure. The 2-dimensional accumulator array, namely the Hough space in parametric form (ρ, θ), for computing the strength of each line by a voting mechanism is mapped on a 1-dimensional array with regular increments of θ. Then, this Hough space is divided into a number of parallel parts. The computation of (ρ, θ) for the edge pixels and the voting procedure for straight-line determination are therefore executable in parallel. In addition, a synchronized initialization for the Hough space further increases the speed of straight-line detection, so that XGA video processing becomes possible. The designed prototype system has been synthesized on a DE4 platform with a Stratix-IV FPGA device. In the application of road-lane detection, the average processing speed of this HT implementation is 5.4 ms per XGA frame at 200 MHz working frequency.
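
    The accumulator idea that the hardware parallelizes can be pictured with a few lines of serial NumPy; this is only an illustration of the (ρ, θ) voting step, not the FPGA architecture described in the paper:

        import numpy as np

        def hough_votes(edge_img, n_theta=180):
            """Fill a (rho, theta) accumulator: every edge pixel casts one vote per theta."""
            h, w = edge_img.shape
            diag = int(np.ceil(np.hypot(h, w)))               # maximum possible |rho|
            thetas = np.deg2rad(np.arange(n_theta))
            acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
            ys, xs = np.nonzero(edge_img)
            for x, y in zip(xs, ys):
                rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
                acc[rhos, np.arange(n_theta)] += 1
            return acc, thetas

        edges = np.zeros((64, 64), dtype=np.uint8)
        edges[32, :] = 1                                      # a single horizontal line y = 32
        acc, thetas = hough_votes(edges)
        rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
        print("peak votes:", acc.max(), "at theta =", np.degrees(thetas[theta_idx]), "deg")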

  13. Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures

    Directory of Open Access Journals (Sweden)

    Jungang Guan

    2017-01-01

    Full Text Available The Hough Transform (HT) is a method for extracting straight lines from an edge image. The main limitations of the HT for usage in actual applications are computation time and storage requirements. This paper reports a hardware architecture for HT implementation on a Field Programmable Gate Array (FPGA) with parallelized voting procedure. The 2-dimensional accumulator array, namely the Hough space in parametric form (ρ, θ), for computing the strength of each line by a voting mechanism is mapped on a 1-dimensional array with regular increments of θ. Then, this Hough space is divided into a number of parallel parts. The computation of (ρ, θ) for the edge pixels and the voting procedure for straight-line determination are therefore executable in parallel. In addition, a synchronized initialization for the Hough space further increases the speed of straight-line detection, so that XGA video processing becomes possible. The designed prototype system has been synthesized on a DE4 platform with a Stratix-IV FPGA device. In the application of road-lane detection, the average processing speed of this HT implementation is 5.4 ms per XGA frame at 200 MHz working frequency.

  14. Acoustic Neuroma Educational Video

    Medline Plus


  15. Acoustic Neuroma Educational Video

    Medline Plus


  16. Acoustic Neuroma Educational Video

    Medline Plus


  17. High-Speed Video System for Micro-Expression Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Diana Borza

    2017-12-01

    Full Text Available Micro-expressions play an essential part in understanding non-verbal communication and deceit detection. They are involuntary, brief facial movements that are shown when a person is trying to conceal something. Automatic analysis of micro-expressions is challenging due to their low amplitude and short duration (they occur as fast as 1/15 to 1/25 of a second). We propose a full micro-expression analysis system consisting of a high-speed image acquisition setup and a software framework which can detect the frames in which micro-expressions occur as well as determine the type of the emerging expression. The detection and classification methods use fast and simple motion descriptors based on absolute image differences. The recognition module only involves the computation of several 2D Gaussian probabilities. The software framework was tested on two publicly available high-speed micro-expression databases, and the whole system was used to acquire new data. The experiments we performed show that our solution outperforms state-of-the-art works which use more complex and computationally intensive descriptors.

  18. Dynamic detection of abnormalities in video analysis of crowd behavior with DBSCAN and neural networks

    Directory of Open Access Journals (Sweden)

    Hocine Chebi

    2016-10-01

    Full Text Available Visual analysis of human behavior is a broad field within computer vision. In this field of work, we are interested in dynamic methods for the analysis of crowd behavior, which consist in detecting abnormal entities within a group in a dense scene. These scenes are characterized by the presence of a great number of people in the camera's field of vision. The major problem is the development of an autonomous approach for managing a great number of anomalies, a task that is almost impossible for human operators to carry out. We present in this paper a new approach for the detection of dynamic anomalies in very dense scenes by measuring the speed of both the individuals and the whole group. The various anomalies are detected by dynamically switching between two approaches: an artificial neural network (ANN) for the management of group anomalies, and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) in the case of individual entities. For greater robustness and effectiveness, we introduced two routines that serve to eliminate shadows and to manage occlusions. With these two additional phases, the simulation results prove comparable to existing work.
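
    A hedged illustration of the entity-level branch only, assuming scikit-learn and synthetic per-entity motion features (speed and direction change); entities that DBSCAN leaves unclustered (label -1) are treated as anomalous:

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(1)
        # 200 ordinary pedestrians: roughly 1.2 m/s, small direction changes (synthetic values).
        normal_walkers = rng.normal([1.2, 0.0], [0.2, 0.3], size=(200, 2))
        runner = np.array([[4.5, 1.8]])           # one fast, erratic entity
        X = np.vstack([normal_walkers, runner])

        labels = DBSCAN(eps=0.4, min_samples=5).fit_predict(X)
        print("indices flagged as anomalous:", np.where(labels == -1)[0])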

  19. Detection of genetically modified maize events in Brazilian maize-derived food products

    Directory of Open Access Journals (Sweden)

    Maria Regina Branquinho

    2013-09-01

    Full Text Available The Brazilian government has approved many transgenic maize lines for commercialization and has established a threshold of 1% for food labeling, which underscores the need for monitoring programs. Thirty-four samples, including flours and different types of nacho chips, were analyzed by conventional and real-time PCR in 2011 and 2012. The events MON810, Bt11, and TC1507 were detected in most of the samples, and NK603 was present only in the samples analyzed in 2012. The authorized lines GA21 and T25 and the unauthorized line Bt176 were not detected. All samples collected in 2011 that were positive in the qualitative tests showed a transgenic content higher than 1%, and none of them was correctly labeled. Regarding the samples collected in 2012, all positive samples were quantified above the threshold, and 47.0% were not correctly labeled. The overall results indicated that the major genetically modified organisms detected were the MON810, TC1507, Bt11, and NK603 events. Some companies that had failed to label their products in 2011 started labeling them in 2012, demonstrating compliance with the current legislation and respect for consumer rights. Although these results are encouraging, the need for continuous monitoring programs to assure consumers that food products are labeled properly has been clearly demonstrated.

  20. Application of Data Cubes for Improving Detection of Water Cycle Extreme Events

    Science.gov (United States)

    Albayrak, Arif; Teng, William

    2015-01-01

    As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case of our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme events, a specific case of anomaly detection, requiring time series data. We investigate the use of support vector machines (SVM) for anomaly classification. We show an example of detection of water cycle extreme events, using data from the Tropical Rainfall Measuring Mission (TRMM).

  1. Advanced Clinical Decision Support for Vaccine Adverse Event Detection and Reporting.

    Science.gov (United States)

    Baker, Meghan A; Kaelber, David C; Bar-Shain, David S; Moro, Pedro L; Zambarano, Bob; Mazza, Megan; Garcia, Crystal; Henry, Adam; Platt, Richard; Klompas, Michael

    2015-09-15

    Reporting of adverse events (AEs) following vaccination can help identify rare or unexpected complications of immunizations and aid in characterizing potential vaccine safety signals. We developed an open-source, generalizable clinical decision support system called Electronic Support for Public Health-Vaccine Adverse Event Reporting System (ESP-VAERS) to assist clinicians with AE detection and reporting. ESP-VAERS monitors patients' electronic health records for new diagnoses, changes in laboratory values, and new allergies following vaccinations. When suggestive events are found, ESP-VAERS sends the patient's clinician a secure electronic message with an invitation to affirm or refute the message, add comments, and submit an automated, prepopulated electronic report to VAERS. High-probability AEs are reported automatically if the clinician does not respond. We implemented ESP-VAERS in December 2012 throughout the MetroHealth System, an integrated healthcare system in Ohio. We queried the VAERS database to determine MetroHealth's baseline reporting rates from January 2009 to March 2012 and then assessed changes in reporting rates with ESP-VAERS. In the 8 months following implementation, 91 622 vaccinations were given. ESP-VAERS sent 1385 messages to responsible clinicians describing potential AEs. Clinicians opened 1304 (94.2%) messages, responded to 209 (15.1%), and confirmed 16 for transmission to VAERS. An additional 16 high-probability AEs were sent automatically. Reported events included seizure, pleural effusion, and lymphocytopenia. The odds of a VAERS report submission during the implementation period were 30.2 (95% confidence interval, 9.52-95.5) times greater than the odds during the comparable preimplementation period. An open-source, electronic health record-based clinical decision support system can increase AE detection and reporting rates in VAERS. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society

  2. Detection of transgenic events in maize using immunochromatographic strip test and conventional PCR

    Directory of Open Access Journals (Sweden)

    Narjara Fonseca Cantelmo

    2013-10-01

    Full Text Available With the growth in the transgenic market, fast and economically viable methodologies are necessary for undertaking transgene detection tests, both for identification of contamination in seeds and in grain. Seeds from the commercial conventional GNZ 2004 cultivar and the transgenic VT-Pro (MON89034), Roundup Ready (NK603), and Herculex (TC1507) maize cultivars were used. In order to simulate different levels of contamination, the transgenic seeds were mixed with conventional seeds at levels of 0.2%, 0.4%, 1.0% and 1.6% for VT-Pro, and 0.2%, 0.5%, 0.8% and 1.2% for Roundup Ready and Herculex. The lateral flow membrane strip test was performed on the whole seed, the endosperm, and the embryo. To evaluate the specificity of detection of the TC1507 event by the conventional PCR technique, seeds of the commercial maize hybrid GNZ 2004 were used as the negative control and the maize hybrid 2B655Hx as the positive control. In order to simulate different levels of contamination, transgenic seeds were mixed with conventional seeds at levels of 10%, 5%, 1%, 0.5% and 0.1%. Seeds from each sample were crushed, and then DNA extraction was performed by the 2% CTAB method. Using the immunochromatographic strip, it was possible to evaluate the expression of proteins related to the VT-Pro, Roundup Ready and Herculex events when whole seeds were used at the 0.2% level of contamination, whereas by the conventional PCR technique it was possible to detect the TC1507 event in samples with 1% contamination.

  3. Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.

    Directory of Open Access Journals (Sweden)

    Andrew Cron

    Full Text Available Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a
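
    The authors provide their own GPU-accelerated software; as a much simpler stand-in, scikit-learn's truncated variational Dirichlet-process Gaussian mixture can illustrate how mixture components pick out a rare, spiked-in subset without manual gating. The hierarchical sharing across samples is not reproduced here, and the two-marker data are synthetic:

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(2)
        bulk = rng.normal([2.0, 2.0], 0.4, size=(5000, 2))    # abundant cell population
        rare = rng.normal([5.0, 5.5], 0.2, size=(10, 2))      # ~0.2% "antigen-specific" subset
        X = np.vstack([bulk, rare])

        dpgmm = BayesianGaussianMixture(
            n_components=10,                                   # truncation level
            weight_concentration_prior_type="dirichlet_process",
            random_state=0,
        ).fit(X)
        labels = dpgmm.predict(X)
        rare_component = labels[-1]                            # component claiming a spiked cell
        print("estimated rare-subset frequency:", np.mean(labels == rare_component))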

  4. Analysis of arrhythmic events is useful to detect lead failure earlier in patients followed by remote monitoring.

    Science.gov (United States)

    Nishii, Nobuhiro; Miyoshi, Akihito; Kubo, Motoki; Miyamoto, Masakazu; Morimoto, Yoshimasa; Kawada, Satoshi; Nakagawa, Koji; Watanabe, Atsuyuki; Nakamura, Kazufumi; Morita, Hiroshi; Ito, Hiroshi

    2017-12-01

    Remote monitoring (RM) has been advocated as the new standard of care for patients with cardiovascular implantable electronic devices (CIEDs). RM has allowed the early detection of adverse clinical events, such as arrhythmia, lead failure, and battery depletion. However, lead failure was often identified only by arrhythmic events, but not impedance abnormalities. To compare the usefulness of arrhythmic events with conventional impedance abnormalities for identifying lead failure in CIED patients followed by RM. CIED patients in 12 hospitals have been followed by the RM center in Okayama University Hospital. All transmitted data have been analyzed and summarized. From April 2009 to March 2016, 1,873 patients have been followed by the RM center. During the mean follow-up period of 775 days, 42 lead failure events (atrial lead 22, right ventricular pacemaker lead 5, implantable cardioverter defibrillator [ICD] lead 15) were detected. The proportion of lead failures detected only by arrhythmic events, which were not detected by conventional impedance abnormalities, was significantly higher than that detected by impedance abnormalities (arrhythmic event 76.2%, 95% CI: 60.5-87.9%; impedance abnormalities 23.8%, 95% CI: 12.1-39.5%). Twenty-seven events (64.7%) were detected without any alert. Of 15 patients with ICD lead failure, none has experienced inappropriate therapy. RM can detect lead failure earlier, before clinical adverse events. However, CIEDs often diagnose lead failure as just arrhythmic events without any warning. Thus, to detect lead failure earlier, careful human analysis of arrhythmic events is useful. © 2017 Wiley Periodicals, Inc.

  5. Improving Infrasound Signal Detection and Event Location in the Western US Using Atmospheric Modeling

    Science.gov (United States)

    Dannemann, F. K.; Park, J.; Marcillo, O. E.; Blom, P. S.; Stump, B. W.; Hayward, C.

    2016-12-01

    Data from five infrasound arrays in the western US jointly operated by the University of Utah Seismograph Station and Southern Methodist University are used to test a database-centric processing pipeline, InfraPy, for automated event detection, association and location. Infrasonic array data from a one-year time period (January 1, 2012 to December 31, 2012) are used. This study focuses on the identification and location of 53 ground-truth verified events produced from near-surface military explosions at the Utah Test and Training Range (UTTR). Signals are detected using an adaptive F-detector, which accounts for correlated and uncorrelated time-varying noise in order to reduce false detections due to the presence of coherent noise. Variations in detection azimuth and correlation are found to be consistent with seasonal changes in atmospheric winds. The Bayesian infrasonic source location (BISL) method is used to produce source location and time credibility contours based on posterior probability density functions. Updates to the previous BISL methodology include the application of celerity range and azimuth deviation distributions in order to accurately account for the spatial and temporal variability of infrasound propagation through the atmosphere. These priors are estimated by ray tracing through Ground-to-Space (G2S) atmospheric models as a function of season and time of day using historic atmospheric characterizations from 2007 to 2013. Out of the 53 events, 31 are successfully located using the InfraPy pipeline. Confidence contour areas for maximum a posteriori event locations produce error estimates which are reduced by a maximum of 98% and an average of 25% relative to location estimates utilizing a simple time-independent uniform atmosphere. We compare real-time ray tracing results with the statistical atmospheric priors used in this study to examine large time differences between known origin times and estimated origin times that might be due to the misidentification of

  6. Real-time movement detection and analysis for video surveillance applications

    Science.gov (United States)

    Hueber, Nicolas; Hennequin, Christophe; Raymond, Pierre; Moeglin, Jean-Pierre

    2014-06-01

    Pedestrian movement along critical infrastructures such as pipes, railways, or highways, as well as pedestrian behavior in urban environments, is of major interest in surveillance applications. The goal is to anticipate illicit or dangerous human activities. For this purpose, we propose an all-in-one small autonomous system which delivers high-level statistics and reports alerts in specific cases. This situational awareness project requires efficient management of the scene through movement analysis. A dynamic background extraction algorithm is developed to reach the required degree of robustness against natural and urban environment perturbations while also matching the embedded implementation constraints. When changes are detected in the scene, specific patterns are applied to detect and highlight relevant movements. Depending on the application, specific descriptors can be extracted and fused in order to reach a high level of interpretation. In this paper, our approach is applied to two operational use cases: pedestrian urban statistics and railway surveillance. In the first case, a grid of prototypes is deployed over a city centre to collect pedestrian movement statistics up to a macroscopic level of analysis. The results demonstrate the relevance of the delivered information; in particular, the flow density map highlights preferential pedestrian paths along the streets. In the second case, one prototype is set next to high-speed train tracks to secure the area. The results exhibit a low false alarm rate and validate our approach of a large sensor network for delivering a precise operational picture without overwhelming a supervisor.
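
    Dynamic background extraction can be pictured with a minimal exponential running-average model; this is an assumed textbook formulation, not the embedded algorithm used in the prototypes, and the threshold and learning rate are arbitrary:

        import numpy as np

        def detect_motion(frames, alpha=0.05, thresh=25):
            """Yield a boolean motion mask per grayscale frame (H x W uint8)."""
            background = None
            for frame in frames:
                f = frame.astype(np.float32)
                if background is None:
                    background = f.copy()
                mask = np.abs(f - background) > thresh               # pixels that changed
                background = (1 - alpha) * background + alpha * f    # slow background update
                yield mask

        # Toy sequence: a static scene, then a bright block appears in frame 7.
        frames = [np.full((48, 64), 100, np.uint8) for _ in range(10)]
        frames[7][20:30, 30:40] = 200
        for i, mask in enumerate(detect_motion(frames)):
            if mask.any():
                print("movement detected in frame", i, "-", int(mask.sum()), "pixels")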

  7. Gait event detection on level ground and incline walking using a rate gyroscope.

    Science.gov (United States)

    Catalfamo, Paola; Ghoussayni, Salim; Ewins, David

    2010-01-01

    Gyroscopes have been proposed as sensors for ambulatory gait analysis and functional electrical stimulation systems. Accurate determination of the Initial Contact of the foot with the floor (IC) and the final contact or Foot Off (FO) on different terrains is important. This paper describes the evaluation of a gyroscope placed on the shank for determination of IC and FO in subjects walking outdoors on level ground, and up and down an incline. Performance was compared with a reference pressure measurement system. The mean difference between the gyroscope and the reference was less than -25 ms for IC and less than 75 ms for FO for all terrains. Detection success was over 98%. These results provide preliminary evidence supporting the use of the gyroscope for gait event detection on inclines as well as level walking.

  8. The ADE scorecards: a tool for adverse drug event detection in electronic health records.

    Science.gov (United States)

    Chazard, Emmanuel; Băceanu, Adrian; Ferret, Laurie; Ficheur, Grégoire

    2011-01-01

    Although several methods exist for the detection of Adverse Drug Events (ADE) in past hospitalizations, a tool that could display those ADEs to physicians does not yet exist. This article presents the ADE Scorecards, a Web tool that enables the screening of past hospitalizations extracted from Electronic Health Records (EHR) using a set of ADE detection rules, presently rules discovered by data mining. The tool enables physicians to (1) get contextualized statistics about the ADEs that happen in their medical department, (2) see the rules that are useful in their department, i.e. the rules that could have enabled those ADEs to be prevented, and (3) review the ADE cases in detail through a comprehensive interface displaying the diagnoses, procedures, lab results, administered drugs, and anonymized records. The article demonstrates the tool through a use case.

  9. Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-06

    Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
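
    The basic (unoptimized) swinging door idea can be sketched as follows; the dynamic-programming merging and parameter optimization of the paper are not reproduced, the door width epsilon is arbitrary, and the toy power series is invented:

        def swinging_door_segments(values, epsilon):
            """Split a series into approximately linear segments using swinging doors."""
            segments, start = [], 0
            up_slope, low_slope = float("-inf"), float("inf")
            i = 1
            while i < len(values):
                dt = i - start
                up = (values[i] - (values[start] + epsilon)) / dt
                low = (values[i] - (values[start] - epsilon)) / dt
                up_slope, low_slope = max(up_slope, up), min(low_slope, low)
                if up_slope > low_slope:            # the doors have crossed: close the segment
                    segments.append((start, i - 1))
                    start = i - 1
                    up_slope, low_slope = float("-inf"), float("inf")
                    continue                        # re-check this point against the new anchor
                i += 1
            segments.append((start, len(values) - 1))
            return segments

        power = [10, 10, 11, 30, 55, 80, 81, 80, 79, 80]   # a steep ramp between samples 2 and 5
        for s, e in swinging_door_segments(power, epsilon=5.0):
            ramp_rate = (power[e] - power[s]) / max(e - s, 1)
            print(f"segment {s}-{e}: ramp rate {ramp_rate:.1f} per sample")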

  10. Single Event Upset Detection and Hardening schemes for CNTFET SRAM – A Review

    Directory of Open Access Journals (Sweden)

    T.R.Rajalakshmi

    2015-12-01

    Full Text Available Carbon nanotubes (CNT) provide a better alternative to silicon at the nanoscale. Thanks to their high stability and high performance, CNT-based FET (CNTFET) devices have been gaining popularity of late. A Single Event Upset (SEU) in a device is caused by radiation, which arises in two ways: from charged particles present in the atmosphere and from alpha particles. In this article we review some of the detection and hardening schemes for CMOS SRAM and perform related simulations on CNTFET SRAM. The aim of this paper is to present the challenges CNTFET SRAM faces when radiation effects are introduced. A full experimental evaluation of all detection and correction schemes is beyond the scope of this work, so the focus is on selected experiments that can be carried out with CNTFET SRAM memory.

  11. Keyframe labeling technique for surveillance event classification

    Science.gov (United States)

    Şaykol, Ediz; Baştan, Muhammet; Güdükbay, Uğur; Ulusoy, Özgür

    2010-11-01

    The huge amount of video data generated by surveillance systems necessitates the use of automatic tools for their efficient analysis, indexing, and retrieval. Automated access to the semantic content of surveillance videos to detect anomalous events is among the basic tasks; however, due to the high variability of the audio-visual features and large size of the video input, it still remains a challenging task, though a considerable amount of research dealing with automated access to video surveillance has appeared in the literature. We propose a keyframe labeling technique, especially for indoor environments, which assigns labels to keyframes extracted by a keyframe detection algorithm, and hence transforms the input video to an event-sequence representation. This representation is used to detect unusual behaviors, such as crossover, deposit, and pickup, with the help of three separate mechanisms based on finite state automata. The keyframes are detected based on a grid-based motion representation of the moving regions, called the motion appearance mask. It has been shown through performance experiments that the keyframe labeling algorithm significantly reduces the storage requirements and yields reasonable event detection and classification performance.
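
    A toy finite-state automaton over a keyframe label sequence conveys the flavour of the approach; the label names and transition rules below are invented for illustration and are not the ones used in the paper:

        # Hypothetical FSA consuming keyframe labels and firing on a "deposit"-like pattern.
        def detect_deposit(labels):
            state = "idle"
            for i, lab in enumerate(labels):
                if state == "idle" and lab == "person_with_object":
                    state = "carrying"
                elif state == "carrying" and lab == "object_left_alone":
                    return i                     # keyframe index where the event completes
                elif lab == "empty_scene":
                    state = "idle"
            return None

        sequence = ["empty_scene", "person_with_object", "person_with_object",
                    "object_left_alone", "empty_scene"]
        print("deposit detected at keyframe:", detect_deposit(sequence))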

  12. Endpoint Visual Detection of Three Genetically Modified Rice Events by Loop-Mediated Isothermal Amplification

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2012-11-01

    Full Text Available Genetically modified (GM) rice KMD1, TT51-1, and KF6 are three of the most well known transgenic Bt rice lines in China. A rapid and sensitive molecular assay for risk assessment of GM rice is needed. Polymerase chain reaction (PCR), currently the most common method for detecting genetically modified organisms, requires temperature cycling and relatively complex procedures. Here we developed a visual and rapid loop-mediated isothermal amplification (LAMP) method to amplify three GM rice event-specific junction sequences. Target DNA was amplified and visualized by two indicators (SYBR green or hydroxy naphthol blue [HNB]) within 60 min at an isothermal temperature of 63 °C. Different kinds of plants were selected to ensure the specificity of detection and the results of the non-target samples were negative, indicating that the primer sets for the three GM rice varieties had good levels of specificity. The sensitivity of LAMP, with detection limits at low concentration levels (0.01%–0.005% GM), was 10- to 100-fold greater than that of conventional PCR. Additionally, the LAMP assay coupled with an indicator (SYBR green or HNB) facilitated analysis. These findings revealed that the rapid detection method was suitable as a simple field-based test to determine the status of GM crops.

  13. Event-related potential measures of gap detection threshold during natural sleep.

    Science.gov (United States)

    Muller-Gass, Alexandra; Campbell, Kenneth

    2014-08-01

    The minimum time interval between two stimuli that can be reliably detected is called the gap detection threshold. The present study examines whether an unconscious state, natural sleep, affects the gap detection threshold. Event-related potentials were recorded in 10 young adults while awake and during all-night sleep to provide an objective estimate of this threshold. These subjects were presented with 2, 4, 8 or 16 ms gaps occurring in white noise of 1.5 s duration. During wakefulness, a significant N1 was elicited for the 8 and 16 ms gaps. N1 was difficult to observe during stage N2 sleep, even for the longest gap. A large P2 was nevertheless elicited and was significant for the 8 and 16 ms gaps. Also, a later, very large N350 was elicited by the 16 ms gap. An N1 and a P2 were significant only for the 16 ms gap during REM sleep. ERPs to gaps occurring in noise segments can therefore be successfully elicited during natural sleep. The gap detection threshold is similar in the waking and sleeping states. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Automated off-line respiratory event detection for the study of postoperative apnea in infants.

    Science.gov (United States)

    Aoude, Ahmed A; Kearney, Robert E; Brown, Karen A; Galiana, Henrietta L; Robles-Rubio, Carlos A

    2011-06-01

    Previously, we presented automated methods for thoraco-abdominal asynchrony estimation and movement artifact detection in respiratory inductance plethysmography (RIP) signals. This paper combines and improves these methods to give a single method for the automated, off-line detection of pause, movement artifact, and asynchrony. Simulation studies demonstrated that the new combined method is accurate and robust in the presence of noise. The new procedure was successfully applied to cardiorespiratory signals acquired postoperatively from infants in the recovery room. A comparison of the events detected with the automated method to those visually scored by an expert clinician demonstrated a higher agreement (κ = 0.52) than that amongst several human scorers (κ = 0.31) in a clinical study. The method provides the following advantages: first, it is fully automated; second, it is more efficient than visual scoring; third, the analysis is repeatable and standardized; fourth, it provides greater agreement with an expert scorer compared to the agreement between trained scorers; fifth, it is amenable to online detection; and lastly, it is applicable to uncalibrated RIP signals. Examples of applications include respiratory monitoring of postsurgical patients and sleep studies.

  15. A Cluster-Based Fuzzy Fusion Algorithm for Event Detection in Heterogeneous Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    ZiQi Hao

    2015-01-01

    Full Text Available As limited energy is one of the tough challenges in wireless sensor networks (WSN), energy saving becomes important for increasing the lifecycle of the network. Data fusion enables information from several sources to be combined into a unified scenario, which can significantly save sensor energy and enhance the accuracy of the sensed data. In this paper, we propose a cluster-based data fusion algorithm for event detection. We use the k-means algorithm to form the nodes into clusters, which significantly reduces the energy consumption of intracluster communication. The distances between the cluster heads and the event, together with the energy of the clusters, are fuzzified, and fuzzy logic is used to select the clusters that will participate in data uploading and fusion. The fuzzy logic method is also used by the cluster heads for local decisions, and the local decision results are then sent to the base station. Decision-level fusion for the final event decision is performed by the base station according to the uploaded local decisions and the fusion support degree of the clusters calculated by the fuzzy logic method. The effectiveness of this algorithm is demonstrated by simulation results.

  16. Abnormal event detection in crowded scenes using two sparse dictionaries with saliency

    Science.gov (United States)

    Yu, Yaping; Shen, Wei; Huang, He; Zhang, Zhijiang

    2017-05-01

    Abnormal event detection in crowded scenes is a challenging problem due to the high density of the crowds and the occlusions between individuals. We propose a method using two sparse dictionaries with saliency to detect abnormal events in crowded scenes. By combining a multiscale histogram of optical flow (MHOF) and a multiscale histogram of oriented gradient (MHOG) into a multiscale histogram of optical flow and gradient, we are able to represent the features of a spatial-temporal cuboid without separating the individuals in the crowd. While MHOF captures the temporal information, MHOG encodes both spatial and temporal information. The combination of these two features is able to represent the cuboid's appearance and motion characteristics even when the density of the crowds becomes high. An abnormal dictionary is added to the traditional sparse model, which includes only a normal dictionary. In addition, the saliency of the testing sample is combined with two sparse reconstruction costs on the normal and abnormal dictionaries to measure the normalness of the testing sample. The experimental results show the effectiveness of our method.

  17. Pinda: a web service for detection and analysis of intraspecies gene duplication events.

    Science.gov (United States)

    Kontopoulos, Dimitrios-Georgios; Glykos, Nicholas M

    2013-09-01

    We present Pinda, a Web service for the detection and analysis of possible duplications of a given protein or DNA sequence within a source species. Pinda fully automates the whole gene duplication detection procedure, from performing the initial similarity searches, to generating the multiple sequence alignments and the corresponding phylogenetic trees, to bootstrapping the trees and producing a Z-score-based list of duplication candidates for the input sequence. Pinda has been cross-validated using an extensive set of known and bibliographically characterized duplication events. The service facilitates the automatic and dependable identification of gene duplication events, using some of the most successful bioinformatics software to perform an extensive analysis protocol. Pinda will prove of use for the analysis of newly discovered genes and proteins, thus also assisting the study of recently sequenced genomes. The service's location is http://orion.mbg.duth.gr/Pinda. The source code is freely available via https://github.com/dgkontopoulos/Pinda/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Human facial skin detection in thermal video to effectively measure electrodermal activity (EDA)

    Science.gov (United States)

    Kaur, Balvinder; Hutchinson, J. Andrew; Leonard, Kevin R.; Nelson, Jill K.

    2011-06-01

    In the past, autonomic nervous system response has often been determined through measuring Electrodermal Activity (EDA), sometimes referred to as Skin Conductance (SC). Recent work has shown that high resolution thermal cameras can passively and remotely obtain an analog to EDA by assessing the activation of facial eccrine skin pores. This paper investigates a method to distinguish facial skin from non-skin portions on the face to generate a skin-only Dynamic Mask (DM), validates the DM results, and demonstrates DM performance by removing false pore counts. Moreover, this paper shows results from these techniques using data from 20+ subjects across two different experiments. In the first experiment, subjects were presented with primary screening questions for which some had jeopardy. In the second experiment, subjects experienced standard emotion-eliciting stimuli. The results from using this technique will be shown in relation to data and human perception (ground truth). This paper introduces an automatic end-to-end skin detection approach based on texture feature vectors. In doing so, the paper contributes not only a new capability of tracking facial skin in thermal imagery, but also enhances our capability to provide non-contact, remote, passive, and real-time methods for determining autonomic nervous system responses for medical and security applications.

  19. Detecting regular sound changes in linguistics as events of concerted evolution.

    Science.gov (United States)

    Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy

    2015-01-05

    Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Detection of ULF geomagnetic signals associated with seismic events in Central Mexico using Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    O. Chavez

    2010-12-01

    Full Text Available The geomagnetic observatory of Juriquilla, Mexico, located at longitude –100.45° and latitude 20.70°, at 1946 m a.s.l., has been operational since June 2004, compiling geomagnetic field measurements with a three-component fluxgate magnetometer. In this paper, the results of the analysis of these measurements in relation to important seismic activity in the period 2007 to 2009 are presented. For this purpose, we applied the Discrete Wavelet Transform to superposed epochs of the filtered signals for the three components of the geomagnetic field during relative seismic calm, and compared them with epochs corresponding to seismic events of magnitude Ms > 5.5 that occurred in Mexico. The analysed epochs consisted of 18 h of observations for a dataset corresponding to 18 different earthquakes (EQs). The time series were processed for a period of 9 h prior to and 9 h after each seismic event. This data processing was compared with the same number of observations during a seismic calm. The proposed methodology proved to be an efficient tool to detect signals associated with seismic activity, especially when the seismic events occur at a distance D from the observatory to the EQ such that the ratio D/ρ < 1.8, where ρ is the radius of the earthquake preparation zone. The methodology presented herein reveals important anomalies in the Ultra Low Frequency range (ULF; 0.005–1 Hz), primarily from 0.25 to 0.5 Hz. Furthermore, the time variance (σ²) of the obtained D1 coefficient increases prior to, during, and after the seismic event, principally in the Bx (N-S) and By (E-W) geomagnetic components. Therefore, this paper proposes and develops a new methodology to extract the abnormal geomagnetic signals related to different stages of the EQs.
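
    A rough sketch of the D1 analysis, assuming the PyWavelets package and a synthetic 1 Hz-sampled component with an injected ULF burst (none of the station data or processing parameters are reproduced). With 1 Hz sampling, the first detail level D1 covers roughly the 0.25–0.5 Hz band mentioned above:

        import numpy as np
        import pywt

        fs = 1.0                                    # Hz; D1 then spans ~0.25-0.5 Hz
        t = np.arange(0, 3600, 1 / fs)
        rng = np.random.default_rng(3)
        bx = rng.normal(0.0, 1.0, t.size)           # synthetic Bx component (noise only)
        bx[1800:2100] += 0.8 * np.sin(2 * np.pi * 0.3 * t[1800:2100])   # injected ULF burst

        coeffs = pywt.wavedec(bx, "db4", level=3)   # [cA3, cD3, cD2, cD1]
        d1 = coeffs[-1]
        half = d1.size // 2
        print("sigma^2 of D1, first half :", np.var(d1[:half]))
        print("sigma^2 of D1, second half:", np.var(d1[half:]))   # larger: contains the burst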

  1. Predictors of Arrhythmic Events Detected by Implantable Loop Recorders in Renal Transplant Candidates

    Directory of Open Access Journals (Sweden)

    Rodrigo Tavares Silva

    2015-11-01

    Full Text Available Background: The recording of arrhythmic events (AE) in renal transplant candidates (RTCs) undergoing dialysis is limited by conventional electrocardiography. However, continuous cardiac rhythm monitoring seems to be more appropriate due to automatic detection of arrhythmia, but this method has not been used. Objective: We aimed to investigate the incidence and predictors of AE in RTCs using an implantable loop recorder (ILR). Methods: A prospective observational study conducted from June 2009 to January 2011 included 100 consecutive ambulatory RTCs who underwent ILR and were followed-up for at least 1 year. Multivariate logistic regression was applied to define predictors of AE. Results: During a mean follow-up of 424 ± 127 days, AE could be detected in 98% of patients, and 92% had more than one type of arrhythmia, with most considered potentially not serious. Sustained atrial tachycardia and atrial fibrillation occurred in 7% and 13% of patients, respectively, and bradyarrhythmia and non-sustained or sustained ventricular tachycardia (VT) occurred in 25% and 57%, respectively. There were 18 deaths, of which 7 were sudden cardiac events: 3 bradyarrhythmias, 1 ventricular fibrillation, 1 myocardial infarction, and 2 undetermined. The presence of a long QTc (odds ratio [OR] = 7.28; 95% confidence interval [CI], 2.01–26.35; p = 0.002) and the duration of the PR interval (OR = 1.05; 95% CI, 1.02–1.08; p < 0.001) were independently associated with bradyarrhythmias. Left ventricular dilatation (LVD) was independently associated with non-sustained VT (OR = 2.83; 95% CI, 1.01–7.96; p = 0.041). Conclusions: In medium-term follow-up of RTCs, ILR helped detect a high incidence of AE, most of which did not have clinical relevance. The PR interval and presence of long QTc were predictive of bradyarrhythmias, whereas LVD was predictive of non-sustained VT.

  2. Predictors of Arrhythmic Events Detected by Implantable Loop Recorders in Renal Transplant Candidates

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Rodrigo Tavares; Martinelli Filho, Martino, E-mail: martino@cardiol.br; Peixoto, Giselle de Lima; Lima, José Jayme Galvão de; Siqueira, Sérgio Freitas de; Costa, Roberto; Gowdak, Luís Henrique Wolff [Instituto do Coração do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, SP (Brazil); Paula, Flávio Jota de [Unidade de Transplante Renal - Divisão de Urologia do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, SP (Brazil); Kalil Filho, Roberto; Ramires, José Antônio Franchini [Instituto do Coração do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, SP (Brazil)

    2015-11-15

    The recording of arrhythmic events (AE) in renal transplant candidates (RTCs) undergoing dialysis is limited by conventional electrocardiography. However, continuous cardiac rhythm monitoring seems to be more appropriate due to automatic detection of arrhythmia, but this method has not been used. We aimed to investigate the incidence and predictors of AE in RTCs using an implantable loop recorder (ILR). A prospective observational study conducted from June 2009 to January 2011 included 100 consecutive ambulatory RTCs who underwent ILR and were followed-up for at least 1 year. Multivariate logistic regression was applied to define predictors of AE. During a mean follow-up of 424 ± 127 days, AE could be detected in 98% of patients, and 92% had more than one type of arrhythmia, with most considered potentially not serious. Sustained atrial tachycardia and atrial fibrillation occurred in 7% and 13% of patients, respectively, and bradyarrhythmia and non-sustained or sustained ventricular tachycardia (VT) occurred in 25% and 57%, respectively. There were 18 deaths, of which 7 were sudden cardiac events: 3 bradyarrhythmias, 1 ventricular fibrillation, 1 myocardial infarction, and 2 undetermined. The presence of a long QTc (odds ratio [OR] = 7.28; 95% confidence interval [CI], 2.01–26.35; p = 0.002), and the duration of the PR interval (OR = 1.05; 95% CI, 1.02–1.08; p < 0.001) were independently associated with bradyarrhythmias. Left ventricular dilatation (LVD) was independently associated with non-sustained VT (OR = 2.83; 95% CI, 1.01–7.96; p = 0.041). In medium-term follow-up of RTCs, ILR helped detect a high incidence of AE, most of which did not have clinical relevance. The PR interval and presence of long QTc were predictive of bradyarrhythmias, whereas LVD was predictive of non-sustained VT.

  3. Video surveillance using JPEG 2000

    Science.gov (United States)

    Dufaux, Frederic; Ebrahimi, Touradj

    2004-11-01

    This paper describes a video surveillance system which is composed of three key components (smart cameras, a server, and clients) connected through IP networks in wired or wireless configurations. The system has been designed so as to protect the privacy of people under surveillance. Smart cameras are based on JPEG 2000 compression, where an analysis module allows for event detection and region-of-interest identification. The resulting regions of interest can then be encoded with better quality and scrambled. Compressed video streams are scrambled and signed for the purpose of privacy and data integrity verification using JPSEC compliant methods. The same bitstream may also be protected for robustness to transmission errors based on JPWL compliant methods. The server receives, stores, manages and transmits the video sequences on wired and wireless channels to a variety of clients and users with different device capabilities, channel characteristics and preferences. Use of seamless scalable coding of video sequences prevents any need for transcoding operations at any point in the system.

  4. Trimpi occurrence and geomagnetic activity: Analysis of events detected at Comandante Ferraz Brazilian Antarctic Station (L=2.25)

    OpenAIRE

    Fernandez, JH; Piazza, LR; Kaufmann, P

    2003-01-01

    We present an analysis of the occurrence of Trimpi events observed at Comandante Ferraz Brazilian Antarctic Station (EACF), at L = 2.25, as seen in the amplitude of very low frequency (VLF) signals transmitted from Hawaii (NPM 21.4 kHz) from April 1996 to August 1999. The event parameters (total duration, amplitude variation, time incidence, and type (negative or positive)) were analyzed for 4394 events detected in the first year (solar minimum and relatively low Trimpi activity). ...

  5. Detection Probability of Trends in Rare Events: Theory and Application to Heavy Precipitation in the Alpine Region.

    Science.gov (United States)

    Frei, Christoph; Schär, Christoph

    2001-04-01

    A statistical framework is presented for the assessment of climatological trends in the frequency of rare and extreme weather events. The methodology applies to long-term records of event counts and is based on the stochastic concept of binomial distributed counts. It embraces logistic regression for trend estimation and testing, and includes a quantification of the potential/limitation to discriminate a trend from the stochastic fluctuations in a record. This potential is expressed in terms of a detection probability, which is calculated from Monte Carlo-simulated surrogate records, and determined as a function of the record length, the magnitude of the trend and the average return period (i.e., the rarity) of events. Calculations of the detection probability for daily events reveal a strong sensitivity upon the rarity of events: in a 100-yr record of seasonal counts, a frequency change by a factor of 1.5 can be detected with a probability of 0.6 for events with an average return period of 30 days; however, this value drops to 0.2 for events with a return period of 100 days. For moderately rare events the detection probability decreases rapidly with shorter record length, but it does not significantly increase with longer record length when very rare events are considered. The results demonstrate the difficulty to determine trends of very rare events, underpin the need for long period data for trend analyses, and point toward a careful interpretation of statistically nonsignificant trend results. The statistical method is applied to examine seasonal trends of heavy daily precipitation at 113 rain gauge stations in the Alpine region of Switzerland (1901-94). For intense events (return period: 30 days) a statistically significant frequency increase was found in winter and autumn for a high number of stations. For strong precipitation events (return period larger than 100 days), trends are mostly statistically nonsignificant, which does not necessarily imply the absence
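
    The detection-probability idea described above lends itself to a compact simulation. The following is a minimal sketch, not the authors' code: it draws binomially distributed seasonal counts whose event frequency changes by a chosen factor over the record, fits a logistic (binomial GLM) trend with statsmodels, and reports how often the trend is significant; the season length, significance level, and number of surrogates are assumed values.

```python
import numpy as np
import statsmodels.api as sm

def detection_probability(n_years=100, days_per_season=90, return_period=30,
                          factor=1.5, n_sim=200, alpha=0.05, seed=0):
    """Fraction of surrogate records in which the logistic trend is significant."""
    rng = np.random.default_rng(seed)
    years = np.arange(n_years, dtype=float)
    p0 = 1.0 / return_period                     # daily event probability at the start
    p = np.linspace(p0, factor * p0, n_years)    # frequency changes by `factor`
    X = sm.add_constant(years)
    hits = 0
    for _ in range(n_sim):
        counts = rng.binomial(days_per_season, p)
        y = np.column_stack([counts, days_per_season - counts])
        fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
        if fit.pvalues[1] < alpha:
            hits += 1
    return hits / n_sim

print(detection_probability(return_period=30))   # moderately rare events
print(detection_probability(return_period=100))  # very rare events: lower probability
```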

  6. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm.

    Science.gov (United States)

    Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin

    2016-10-01

    Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and reduced reliability on stair-ascent and stair-descent terrains. In this study, a new algorithm is proposed to detect gait events on three walking terrains in real time: acceleration jerk signals are analyzed with a time-frequency method to obtain gait parameters, and the peaks of the jerk signals are then determined using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses.
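
    As a rough illustration of the jerk-plus-peak-picking step (not the published algorithm, whose time-frequency stage and heuristics are omitted here), one could differentiate the acceleration and pick dominant jerk peaks with scipy; the sampling rate, peak spacing, and height threshold below are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)  # stand-in signal

jerk = np.gradient(acc, 1 / fs)              # d(acceleration)/dt
# require candidate events to be at least ~0.5 s apart and above the jerk noise level
candidate_events, _ = find_peaks(jerk, distance=int(0.5 * fs), height=np.std(jerk))
event_times = t[candidate_events]
print(event_times)
```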

  7. Towards Real-Time Detection of Gait Events on Different Terrains Using Time-Frequency Analysis and Peak Heuristics Algorithm

    Directory of Open Access Journals (Sweden)

    Hui Zhou

    2016-10-01

    Full Text Available Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and reduced reliability on stair-ascent and stair-descent terrains. In this study, a new algorithm is proposed to detect gait events on three walking terrains in real time: acceleration jerk signals are analyzed with a time-frequency method to obtain gait parameters, and the peaks of the jerk signals are then determined using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses.

  8. Long-term accelerometry-triggered video monitoring and detection of tonic-clonic and clonic seizures in a home environment: Pilot study.

    Science.gov (United States)

    Van de Vel, Anouk; Milosevic, Milica; Bonroy, Bert; Cuppens, Kris; Lagae, Lieven; Vanrumste, Bart; Van Huffel, Sabine; Ceulemans, Berten

    2016-01-01

    The aim of our study was to test the efficacy of the VARIA system (video, accelerometry, and radar-induced activity recording) and validation of accelerometry-based detection algorithms for nocturnal tonic-clonic and clonic seizures developed by our team. We present the results of two patients with tonic-clonic and clonic seizures, measured for about one month in a home environment with four wireless accelerometers (ACM) attached to wrists and ankles. The algorithms were developed using wired ACM data synchronized with the gold standard video-/electroencephalography (EEG) and then run offline on the wireless ACM signals. Detection of seizures was compared with semicontinuous monitoring by professional caregivers (keeping an eye on multiple patients). The best result for the two patients was obtained with the semipatient-specific algorithm which was developed using all patients with tonic-clonic and clonic seizures in our database with wired ACM. It gave a mean sensitivity of 66.87% and false detection rate of 1.16 per night. This included 13 extra seizures detected (31%) compared with professional caregivers' observations. While the algorithms were previously validated in a controlled video/EEG monitoring unit with wired sensors, we now show the first results of long-term, wireless testing in a home environment.

  9. Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-07

    Solar power ramp events (SPREs) are those that significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetration in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) for ramp detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
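
    A minimal sketch of the swinging door segmentation stage that OpSDA builds on is shown below; it is not the authors' code. The door width (epsilon) is an assumed parameter, and the dynamic-programming merge of adjacent segments into significant ramps is not included.

```python
import numpy as np

def swinging_door_segments(power, epsilon):
    """Return (start, end) index pairs of piecewise-linear segments."""
    segments = []
    start = 0
    up_slope, low_slope = -np.inf, np.inf
    for i in range(1, len(power)):
        dt = i - start
        up_slope = max(up_slope, (power[i] - epsilon - power[start]) / dt)
        low_slope = min(low_slope, (power[i] + epsilon - power[start]) / dt)
        if up_slope > low_slope:                 # doors have closed: end the segment
            segments.append((start, i - 1))
            start = i - 1
            dt = i - start
            up_slope = (power[i] - epsilon - power[start]) / dt
            low_slope = (power[i] + epsilon - power[start]) / dt
    segments.append((start, len(power) - 1))
    return segments

solar = np.concatenate([np.linspace(0, 50, 30),      # toy clear-morning ramp-up
                        np.linspace(50, 10, 10),     # sharp cloud-induced ramp-down
                        np.linspace(10, 12, 20)])
for s, e in swinging_door_segments(solar, epsilon=1.0):
    print(s, e, (solar[e] - solar[s]) / max(e - s, 1))   # segment slope = ramp rate
```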

  10. Ultra-Low Power Sensor System for Disaster Event Detection in Metro Tunnel Systems

    Directory of Open Access Journals (Sweden)

    Jonah VINCKE

    2017-05-01

    Full Text Available In this extended paper, the concept for an ultra-low power wireless sensor network (WSN) for underground tunnel systems is presented, highlighting the chosen sensors. Its objectives are the detection of emergency events either from natural disasters, such as flooding or fire, or from terrorist attacks using explosives. Earlier works have demonstrated that the power consumption for the communication can be reduced such that the data acquisition (i.e., the sensor sub-system) becomes the most significant energy consumer. By using ultra-low power components for the smoke detector, a hydrostatic pressure sensor for water ingress detection and a passive acoustic emission sensor for explosion detection, all considered threats are covered while the energy consumption can be kept very low in relation to the data acquisition. In addition to the earlier work, the sensor system is integrated into a sensor board. The total average power consumption for operating the sensor sub-system is measured to be 35.9 µW for lower and 7.8 µW for upper nodes.

  11. Automatic Detection of Pitching and Throwing Events in Baseball With Inertial Measurement Sensors.

    Science.gov (United States)

    Murray, Nick B; Black, Georgia M; Whiteley, Rod J; Gahan, Peter; Cole, Michael H; Utting, Andy; Gabbett, Tim J

    2017-04-01

    Throwing loads are known to be closely related to injury risk. However, for logistic reasons, typically only pitchers have their throws counted, and then only during innings. Accordingly, all other throws made are not counted, so estimates of throws made by players may be inaccurately recorded and underreported. A potential solution to this is the use of wearable microtechnology to automatically detect, quantify, and report pitch counts in baseball. This study investigated the accuracy of detection of baseball pitching and throwing in both practice and competition using a commercially available wearable microtechnology unit. Seventeen elite youth baseball players (mean ± SD age 16.5 ± 0.8 y, height 184.1 ± 5.5 cm, mass 78.3 ± 7.7 kg) participated in this study. Participants performed pitching, fielding, and throwing during practice and competition while wearing a microtechnology unit. Sensitivity and specificity of a pitching and throwing algorithm were determined by comparing automatic measures (ie, microtechnology unit) with direct measures (ie, manually recorded pitching counts). The pitching and throwing algorithm was sensitive during both practice (100%) and competition (100%). Specificity was poorer during both practice (79.8%) and competition (74.4%). These findings demonstrate that the microtechnology unit is sensitive to detect pitching and throwing events, but further development of the pitching algorithm is required to accurately and consistently quantify throwing loads using microtechnology.

  12. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is a joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.

  13. Medan Video Game Center (High Tech Architecture)

    OpenAIRE

    Roni,

    2014-01-01

    The Medan Video Game Center is intended to serve people in Medan who are enthusiastic about video games. The building can also host events related to video games, such as video game exhibitions or competitions. In addition, the Medan Video Game Center serves as an educational facility, containing a video game academy and a vehicle simulator room. The building design uses a double-skin façade concept that highlights the supportin...

  14. Successful syllable detection in aphasia despite processing impairments as revealed by event-related potentials

    Directory of Open Access Journals (Sweden)

    Becker Frank

    2007-01-01

    Full Text Available Abstract Background The role of impaired sound and speech sound processing for auditory language comprehension deficits in aphasia is unclear. No electrophysiological studies of attended speech sound processing in aphasia have been performed for stimuli that are discriminable even for patients with severe auditory comprehension deficits. Methods Event-related brain potentials (ERPs) were used to study speech sound processing in a syllable detection task in aphasia. In an oddball paradigm, the participants had to detect the infrequent target syllable /ta:/ amongst the frequent standard syllable /ba:/. 10 subjects with moderate and 10 subjects with severe auditory comprehension impairment were compared to 11 healthy controls. Results N1 amplitude was reduced indicating impaired primary stimulus analysis; N1 reduction was a predictor for auditory comprehension impairment. N2 attenuation suggests reduced attended stimulus classification and discrimination. However, all aphasic patients were able to discriminate the stimuli almost without errors, and processes related to the target identification (P3) were not significantly reduced. The aphasic subjects might have discriminated the stimuli by purely auditory differences, while the ERP results reveal a reduction of language-related processing which however did not prevent performing the task. Topographic differences between aphasic subgroups and controls indicate compensatory changes in activation. Conclusion Stimulus processing in early time windows (N1, N2) is altered in aphasics with adverse consequences for auditory comprehension of complex language material, while allowing performance of simpler tasks (syllable detection). Compensational patterns of speech sound processing may be activated in syllable detection, but may not be functional in more complex tasks. The degree to which compensational processes can be activated probably varies depending on factors such as lesion site, time after injury, and

  15. First events from the CNGS neutrino beam detected in the OPERA experiment

    CERN Document Server

    Acquafredda, R.; Ambrosio, M.; Anokhina, A.; Aoki, S.; Ariga, A.; Arrabito, L.; Autiero, D.; Badertscher, A.; Bergnoli, A.; Bersani Greggio, F.; Besnier, M.; Beyer, M.; Bondil-Blin, S.; Borer, K.; Boucrot, J.; Boyarkin, V.; Bozza, C.; Brugnera, R.; Buontempo, S.; Caffari, Y.; Campagne, Jean-Eric; Carlus, B.; Carrara, E.; Cazes, A.; Chaussard, L.; Chernyavsky, M.; Chiarella, V.; Chon-Sen, N.; Chukanov, A.; Ciesielski, R.; Consiglio, L.; Cozzi, M.; Dal Corso, F.; D'Ambrosio, N.; Damet, J.; De Lellis, G.; Declais, Y.; Descombes, T.; De Serio, M.; Di Capua, F.; Di Ferdinando, D.; Di Giovanni, A.; Di Marco, N.; Di Troia, C.; Dmitrievski, S.; Dracos, M.; Duchesneau, D.; Dulach, B.; Dusini, S.; Ebert, J.; Enikeev, R.; Ereditato, A.; Esposito, L.S.; Fanin, C.; Favier, J.; Felici, G.; Ferber, T.; Fournier, L.; Franceschi, A.; Frekers, D.; Fukuda, T.; Fukushima, C.; Galkin, V.I.; Galkin, V.A.; Gallet, R.; Garfagnini, A.; Gaudiot, G.; Giacomelli, G.; Giarmana, O.; Giorgini, M.; Girard, L.; Girerd, C.; Goellnitz, C.; Goldberg, J.; Gornoushkin, Y.; Grella, G.; Grianti, F.; Guerin, C.; Guler, M.; Gustavino, C.; Hagner, C.; Hamane, T.; Hara, T.; Hauger, M.; Hess, M.; Hoshino, K.; Ieva, M.; Incurvati, M.; Jakovcic, K.; Janicsko Csathy, J.; Janutta, B.; Jollet, C.; Juget, F.; Kazuyama, M.; Kim, S.H.; Kimura, M.; Knuesel, J.; Kodama, K.; Kolev, D.; Komatsu, M.; Kose, U.; Krasnoperov, A.; Kreslo, I.; Krumstein, Z.; Laktineh, I.; de La Taille, C.; Le Flour, T.; Lieunard, S.; Ljubicic, A.; Longhin, A.; Malgin, A.; Manai, K.; Mandrioli, G.; Mantello, U.; Marotta, A.; Marteau, J.; Martin-Chassard, G.; Matveev, V.; Messina, M.; Meyer, L.; Micanovic, S.; Migliozzi, P.; Miyamoto, S.; Monacelli, Piero; Monteiro, I.; Morishima, K.; Moser, U.; Muciaccia, M.T.; Mugnier, P.; Naganawa, N.; Nakamura, M.; Nakano, T.; Napolitano, T.; Natsume, M.; Niwa, K.; Nonoyama, Y.; Nozdrin, A.; Ogawa, S.; Olchevski, A.; Orlandi, D.; Ossetski, D.; Paoloni, A.; Park, B.D.; Park, I.G.; Pastore, A.; Patrizii, L.; Pellegrino, L.; Pessard, H.; Pilipenko, V.; Pistillo, C.; Polukhina, N.; Pozzato, M.; Pretzl, K.; Publichenko, P.; Raux, L.; Repellin, J.P.; Roganova, T.; Romano, G.; Rosa, G.; Rubbia, A.; Ryasny, V.; Ryazhskaya, O.; Ryzhikov, D.; Sadovski, A.; Sanelli, C.; Sato, O.; Sato, Y.; Saveliev, V.; Savvinov, N.; Sazhina, G.; Schembri, A.; Schmidt Parzefall, W.; Schroeder, H.; Schutz, H.U.; Scotto Lavina, L.; Sewing, J.; Shibuya, H.; Simone, S.; Sioli, M.; Sirignano, C.; Sirri, G.; Song, J.S.; Spaeti, R.; Spinetti, M.; Stanco, L.; Starkov, N.; Stipcevic, M.; Strolin, Paolo Emilio; Sugonyaev, V.; Takahashi, S.; Tereschenko, V.; Terranova, F.; Tezuka, I.; Tioukov, V.; Tikhomirov, I.; Tolun, P.; Toshito, T.; Tsarev, V.; Tsenov, R.; Ugolino, U.; Ushida, N.; Van Beek, G.; Verguilov, V.; Vilain, P.; Votano, L.; Vuilleumier, J.L.; Waelchli, T.; Waldi, R.; Weber, M.; Wilquet, G.; Wonsak, B.; Wurth, R.; Wurtz, J.; Yakushev, V.; Yoon, C.S.; Zaitsev, Y.; Zamboni, I.; Zimmerman, R.

    2006-01-01

    The OPERA neutrino detector at the underground Gran Sasso Laboratory (LNGS) was designed to perform the first detection of neutrino oscillations in appearance mode, through the study of nu_mu to nu_tau oscillations. The apparatus consists of a lead/emulsion-film target complemented by electronic detectors. It is placed in the high-energy, long-baseline CERN to LNGS beam (CNGS) 730 km away from the neutrino source. In August 2006 a first run with CNGS neutrinos was successfully conducted. A first sample of neutrino events was collected, statistically consistent with the integrated beam intensity. After a brief description of the beam and of the various sub-detectors, we report on the achievement of this milestone, presenting the first data and some analysis results.

  16. The Event Detection and the Apparent Velocity Estimation Based on Computer Vision

    Science.gov (United States)

    Shimojo, M.

    2012-08-01

    The high spatial and temporal resolution data obtained by the telescopes aboard Hinode revealed new and interesting dynamics in the solar atmosphere. In order to detect such events and estimate the velocity of the dynamics automatically, we examined optical flow estimation methods based on OpenCV, the computer vision library. We applied the methods to the prominence eruption observed by NoRH, and the polar X-ray jet observed by XRT. As a result, it is clear that the methods work well for solar images if the images are optimized for the methods. This indicates that the optical flow estimation methods in the OpenCV library are very useful for analyzing solar phenomena.
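
    A minimal sketch of the approach, assuming two consecutive solar frames are available as grayscale images: OpenCV's Farneback dense optical flow yields a per-pixel displacement field, and an apparent velocity follows once a plate scale and cadence are assumed. The filenames, plate scale, and cadence below are placeholders.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder filenames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
assert prev is not None and curr is not None, "input frames not found"

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
speed = np.hypot(flow[..., 0], flow[..., 1])                # pixels per frame

arcsec_per_pixel = 1.0        # assumed plate scale
seconds_per_frame = 60.0      # assumed cadence
apparent_velocity = speed * arcsec_per_pixel / seconds_per_frame
print(apparent_velocity.max(), "arcsec/s peak apparent motion")
```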

  17. Detection of events of public health importance under the international health regulations: a toolkit to improve reporting of unusual events by frontline healthcare workers.

    Science.gov (United States)

    MacDonald, Emily; Aavitsland, Preben; Bitar, Dounia; Borgen, Katrine

    2011-09-21

    The International Health Regulations (IHR (2005)) require countries to notify WHO of any event which may constitute a public health emergency of international concern. This notification relies on reports of events occurring at the local level reaching the national public health authorities. By June 2012 WHO member states are expected to have implemented the capacity to "detect events involving disease or death above expected levels for the particular time and place" on the local level and report essential information to the appropriate level of public health authority. Our objective was to develop tools to assist European countries improve the reporting of unusual events of public health significance from frontline healthcare workers to public health authorities. We investigated obstacles and incentives to event reporting through a systematic literature review and expert consultations with national public health officials from various European countries. Multi-day expert meetings and qualitative interviews were used to gather experiences and examples of public health event reporting. Feedback on specific components of the toolkit was collected from healthcare workers and public health officials throughout the design process. Evidence from 79 scientific publications, two multi-day expert meetings and seven qualitative interviews stressed the need to clarify concepts and expectations around event reporting in European countries between the frontline and public health authorities. An analytical framework based on three priority areas for improved event reporting (professional engagement, communication and infrastructure) was developed and guided the development of the various tools. We developed a toolkit adaptable to country-specific needs that includes a guidance document for IHR National Focal Points and nine tool templates targeted at clinicians and laboratory staff: five awareness campaign tools, three education and training tools, and an implementation plan. The

  18. KIWI: A technology for public health event monitoring and early warning signal detection.

    Science.gov (United States)

    Mukhi, Shamir N

    2016-01-01

    To introduce the Canadian Network for Public Health Intelligence's new Knowledge Integration using Web-based Intelligence (KIWI) technology, and to perform a preliminary evaluation of the KIWI technology using a case study. The purpose of this new technology is to support surveillance activities by monitoring unstructured data sources for the early detection and awareness of potential public health threats. A prototype of the KIWI technology, adapted for zoonotic and emerging diseases, was piloted by end-users with expertise in the field of public health and zoonotic/emerging disease surveillance. The technology was assessed using variables such as geographic coverage, user participation, and others; categorized by high-level attributes from evaluation guidelines for internet based surveillance systems. Special attention was given to the evaluation of the system's automated sense-making algorithm, which used variables such as sensitivity, specificity, and predictive values. Event-based surveillance evaluation was not applied to its full capacity as such an evaluation is beyond the scope of this paper. KIWI was piloted with user participation = 85.0% and geographic coverage within monitored sources = 83.9% of countries. The pilots, which focused on zoonotic and emerging diseases, lasted a combined total of 65 days and resulted in the collection of 3243 individual information pieces (IIP) and 2 community reported events (CRE) for processing. Ten sources were monitored during the second phase of the pilot, which resulted in 545 anticipatory intelligence signals (AIS). KIWI's automated sense-making algorithm (SMA) had sensitivity = 63.9% (95% CI: 60.2-67.5%), specificity = 88.6% (95% CI: 87.3-89.8%), positive predictive value = 59.8% (95% CI: 56.1-63.4%), and negative predictive value = 90.3% (95% CI: 89.0-91.4%). Literature suggests the need for internet based monitoring and surveillance systems that are customizable, integrated into collaborative networks of public

  19. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  20. Microseismic events enhancement and detection in sensor arrays using autocorrelation-based filtering

    Science.gov (United States)

    Liu, Entao; Zhu, Lijun; Govinda Raj, Anupama; McClellan, James H.; Al-Shuhail, Abdullatif; Kaka, SanLinn I.; Iqbal, Naveed

    2017-11-01

    Passive microseismic data are commonly buried in noise, which presents a significant challenge for signal detection and recovery. For recordings from a surface sensor array where each trace contains a time-delayed arrival from the event, we propose an autocorrelation-based stacking method that designs a denoising filter from all the traces, as well as a multi-channel detection scheme. This approach circumvents the issue of time aligning the traces prior to stacking because every trace's autocorrelation is centered at zero in the lag domain. The effect of white noise is concentrated near zero lag, so the filter design requires a predictable adjustment of the zero-lag value. Truncation of the autocorrelation is employed to smooth the impulse response of the denoising filter. In order to extend the applicability of the algorithm, we also propose a noise prewhitening scheme that addresses cases with colored noise. The simplicity and robustness of this method are validated with synthetic and real seismic traces.
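
    A rough numpy sketch of the core idea, under assumptions about normalization and the zero-lag adjustment (the paper's exact choices are not reproduced here): per-trace autocorrelations, which are centered at zero lag regardless of arrival time, are stacked, the zero-lag sample is damped, and the truncated result is used as a denoising filter kernel.

```python
import numpy as np

def autocorr_stack_filter(traces, max_lag=100, zero_lag_scale=0.5):
    """traces: (n_traces, n_samples). Returns a symmetric denoising filter kernel."""
    n_samples = traces.shape[1]
    mid = n_samples - 1                                  # zero-lag index of 'full' output
    stack = np.zeros(2 * max_lag + 1)
    for tr in traces:
        tr = tr - tr.mean()
        ac = np.correlate(tr, tr, mode="full")           # centered at zero lag
        stack += ac[mid - max_lag: mid + max_lag + 1]
    stack /= np.abs(stack).max()
    stack[max_lag] *= zero_lag_scale                     # damp the noise-dominated zero lag
    return stack                                         # truncation smooths the response

rng = np.random.default_rng(1)
traces = rng.standard_normal((24, 2000))                 # stand-in surface-array records
kernel = autocorr_stack_filter(traces)
denoised = np.array([np.convolve(tr, kernel, mode="same") for tr in traces])
print(denoised.shape)
```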

  1. Fully Autonomous Multiplet Event Detection: Application to Local-Distance Monitoring of Blood Falls Seismicity

    Energy Technology Data Exchange (ETDEWEB)

    Carmichael, Joshua Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carr, Christina [Univ. of Alaska, Fairbanks, AK (United States); Pettit, Erin C. [Univ. of Alaska, Fairbanks, AK (United States)

    2015-06-18

    We apply a fully autonomous icequake detection methodology to a single day of high-sample-rate (200 Hz) seismic network data recorded at the terminus of Taylor Glacier, Antarctica, that temporally coincided with a brine release episode near Blood Falls (May 13, 2014). We demonstrate a statistically validated procedure to assemble waveforms triggered by icequakes into populations of clusters linked by intra-event waveform similarity. Our processing methodology implements a noise-adaptive power detector coupled with a complete-linkage clustering algorithm and a noise-adaptive correlation detector. This detector chain reveals a population of 20 multiplet sequences that includes ~150 icequakes and produces zero false alarms on the concurrent, diurnally variable noise. Our results are very promising for identifying changes in background seismicity associated with the presence or absence of brine release episodes. We thereby suggest that our methodology could be applied to longer time periods to establish a brine-release monitoring program for Blood Falls based on icequake detections.
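
    The clustering stage can be illustrated with a hedged sketch (assumed correlation threshold, not the LANL workflow): pairwise maximum normalized cross-correlations between detected waveforms define a dissimilarity matrix, and complete-linkage clustering groups them into multiplet candidates.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def max_norm_xcorr(a, b):
    """Maximum of the normalized cross-correlation between two waveforms."""
    a = a - a.mean()
    b = b - b.mean()
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b / (np.linalg.norm(b) + 1e-12)
    return np.abs(np.correlate(a, b, mode="full")).max()

def cluster_waveforms(waveforms, cc_threshold=0.7):
    n = len(waveforms)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = 1.0 - max_norm_xcorr(waveforms[i], waveforms[j])
    Z = linkage(squareform(dist), method="complete")
    return fcluster(Z, t=1.0 - cc_threshold, criterion="distance")

rng = np.random.default_rng(6)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 200)) * np.hanning(200)
events = [np.roll(template, rng.integers(-5, 6)) + rng.normal(0, 0.2, 200)
          for _ in range(15)]                         # repeating (multiplet-like) icequakes
noise = [rng.normal(0, 0.5, 200) for _ in range(5)]   # unrelated triggers
print(cluster_waveforms(events + noise))
```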

  2. High definition colonoscopy combined with i-Scan is superior in the detection of colorectal neoplasias compared with standard video colonoscopy: a prospective randomized controlled trial.

    Science.gov (United States)

    Hoffman, A; Sar, F; Goetz, M; Tresch, A; Mudter, J; Biesterfeld, S; Galle, P R; Neurath, M F; Kiesslich, R

    2010-10-01

    Colonoscopy is the accepted gold standard for the detection of colorectal cancer. The aim of the current study was to prospectively compare high definition plus (HD+) colonoscopy with I-Scan functionality (electronic staining) vs. standard video colonoscopy. The primary endpoint was the detection of patients having colon cancer or at least one adenoma. A total of 220 patients due to undergo screening colonoscopy, postpolypectomy surveillance or with a positive occult blood test were randomized in a 1 : 1 ratio to undergo HD+ colonoscopy in conjunction with I-Scan surface enhancement (90i series, Pentax, Tokyo, Japan) or standard video colonoscopy (EC-3870FZK, Pentax). Detected colorectal lesions were judged according to type, location, and size. Lesions were characterized in the HD+ group by using further I-Scan functionality (p- and v-modes) to analyze pattern and vessel architecture. Histology was predicted and biopsies or resections were performed on all identified lesions. HD+ colonoscopy with I-Scan functionality detected significantly more patients with colorectal neoplasia (38 %) compared with standard resolution endoscopy (13 %) (200 patients finally analyzed; 100 per arm). Significantly more neoplastic (adenomatous and cancerous) lesions and more flat adenomas could be detected using high definition endoscopy with surface enhancement. Final histology could be predicted with high accuracy (98.6 %) within the HD+ group. HD+ colonoscopy with I-Scan is superior to standard video colonoscopy in detecting patients with colorectal neoplasia based on this prospective, randomized, controlled trial. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Detecting Forest Disturbance Events from MODIS and Landsat Time Series for the Conterminous United States

    Science.gov (United States)

    Zhang, G.; Ganguly, S.; Saatchi, S. S.; Hagen, S. C.; Harris, N.; Yu, Y.; Nemani, R. R.

    2013-12-01

    Spatial and temporal patterns of forest disturbance and regrowth processes are key for understanding aboveground terrestrial vegetation biomass and carbon stocks at regional-to-continental scales. The NASA Carbon Monitoring System (CMS) program seeks key input datasets, especially information related to impacts due to natural/man-made disturbances in forested landscapes of Conterminous U.S. (CONUS), that would reduce uncertainties in current carbon stock estimation and emission models. This study provides an end-to-end forest disturbance detection framework based on pixel time series analysis from MODIS (Moderate Resolution Imaging Spectroradiometer) and Landsat surface spectral reflectance data. We applied the BFAST (Breaks for Additive Seasonal and Trend) algorithm to the Normalized Difference Vegetation Index (NDVI) data for the time period from 2000 to 2011. A harmonic seasonal model was implemented in BFAST to decompose the time series into seasonal and interannual trend components in order to detect abrupt changes in magnitude and direction of these components. To apply the BFAST for whole CONUS, we built a parallel computing setup for processing massive time-series data using the high performance computing facility of the NASA Earth Exchange (NEX). In the implementation process, we extracted the dominant deforestation events from the magnitude of abrupt changes in both seasonal and interannual components, and estimated dates for corresponding deforestation events. We estimated the recovery rate for deforested regions through regression models developed between NDVI values and time since disturbance for all pixels. A similar implementation of the BFAST algorithm was performed over selected Landsat scenes (all Landsat cloud free data was used to generate NDVI from atmospherically corrected spectral reflectances) to demonstrate the spatial coherence in retrieval layers between MODIS and Landsat. In future, the application of this largely parallel disturbance
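
    A minimal sketch of the kind of per-pixel analysis described above, not BFAST itself: fit a harmonic-seasonal-plus-trend model to an NDVI series and scan for the single break date that most reduces the residual sum of squares; the period, minimum segment length, and toy series are assumptions.

```python
import numpy as np

def design(t, period=23.0):                 # ~23 16-day MODIS composites per year
    w = 2 * np.pi * t / period
    return np.column_stack([np.ones_like(t), t, np.sin(w), np.cos(w)])

def best_break(series, period=23.0, min_seg=46):
    t = np.arange(len(series), dtype=float)
    def rss(idx):
        X, y = design(t[idx], period), series[idx]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)
    full = rss(slice(None))
    scores = {b: rss(slice(0, b)) + rss(slice(b, None))
              for b in range(min_seg, len(series) - min_seg)}
    b = min(scores, key=scores.get)
    return b, full - scores[b]              # candidate break index and RSS reduction

rng = np.random.default_rng(5)
t = np.arange(12 * 23, dtype=float)
ndvi = 0.6 + 0.15 * np.sin(2 * np.pi * t / 23) + rng.normal(0, 0.02, t.size)
ndvi[150:] -= 0.3                            # abrupt drop: simulated disturbance
print(best_break(ndvi))
```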

  4. The power to detect recent fragmentation events using genetic differentiation methods.

    Directory of Open Access Journals (Sweden)

    Michael W Lloyd

    Full Text Available Habitat loss and fragmentation are imminent threats to biological diversity worldwide and thus are fundamental issues in conservation biology. Increased isolation alone has been implicated as a driver of negative impacts in populations associated with fragmented landscapes. Genetic monitoring and the use of measures of genetic divergence have been proposed as means to detect changes in landscape connectivity. Our goal was to evaluate the sensitivity of Wright's Fst, Hedrick's G'st, Sherwin's MI, and Jost's D to recent fragmentation events across a range of population sizes and sampling regimes. We constructed an individual-based model, which used a factorial design to compare effects of varying population size, presence or absence of overlapping generations, and presence or absence of population sub-structuring. Increases in population size, overlapping generations, and population sub-structuring each reduced Fst, G'st, MI, and D. The signal of fragmentation was detected within two generations for all metrics. However, the magnitude of the change in each was small in all cases, and when Ne was >100 individuals it was extremely small. Multi-generational sampling and population estimates are required to differentiate the signal of background divergence from changes in Fst, G'st, MI, and D associated with fragmentation. Finally, the window during which rapid change in Fst, G'st, MI, and D between generations occurs can be small, and if missed would lead to inconclusive results. For these reasons, use of Fst, G'st, MI, or D for detecting and monitoring changes in connectivity is likely to prove difficult in real-world scenarios. We advocate use of genetic monitoring only in conjunction with estimates of actual movement among patches such that one could compare current movement with the genetic signature of past movement to determine whether there has been a change.

  5. THE DETECTION OF A SN IIn IN OPTICAL FOLLOW-UP OBSERVATIONS OF ICECUBE NEUTRINO EVENTS

    Energy Technology Data Exchange (ETDEWEB)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H. [Astrophysics Research Centre, School of Mathematics and Physics, Queen' s University Belfast, Belfast, BT7 1NN (United Kingdom); Collaboration: IceCube Collaboration; for the PTF Collaboration; for the Swift Collaboration; for the Pan-STARRS1 Science Consortium; and others

    2015-09-20

    The IceCube neutrino observatory pursues a follow-up program selecting interesting neutrino events in real-time and issuing alerts for electromagnetic follow-up observations. In 2012 March, the most significant neutrino alert during the first three years of operation was issued by IceCube. In the follow-up observations performed by the Palomar Transient Factory (PTF), a Type IIn supernova (SN IIn) PTF12csy was found 0.°2 away from the neutrino alert direction, with an error radius of 0.°54. It has a redshift of z = 0.0684, corresponding to a luminosity distance of about 300 Mpc and the Pan-STARRS1 survey shows that its explosion time was at least 158 days (in host galaxy rest frame) before the neutrino alert, so that a causal connection is unlikely. The a posteriori significance of the chance detection of both the neutrinos and the SN at any epoch is 2.2σ within IceCube's 2011/12 data acquisition season. Also, a complementary neutrino analysis reveals no long-term signal over the course of one year. Therefore, we consider the SN detection coincidental and the neutrinos uncorrelated to the SN. However, the SN is unusual and interesting by itself: it is luminous and energetic, bearing strong resemblance to the SN IIn 2010jl, and shows signs of interaction of the SN ejecta with a dense circumstellar medium. High-energy neutrino emission is expected in models of diffusive shock acceleration, but at a low, non-detectable level for this specific SN. In this paper, we describe the SN PTF12csy and present both the neutrino and electromagnetic data, as well as their analysis.

  6. The Power to Detect Recent Fragmentation Events Using Genetic Differentiation Methods

    Science.gov (United States)

    Lloyd, Michael W.; Campbell, Lesley; Neel, Maile C.

    2013-01-01

    Habitat loss and fragmentation are imminent threats to biological diversity worldwide and thus are fundamental issues in conservation biology. Increased isolation alone has been implicated as a driver of negative impacts in populations associated with fragmented landscapes. Genetic monitoring and the use of measures of genetic divergence have been proposed as means to detect changes in landscape connectivity. Our goal was to evaluate the sensitivity of Wright’s Fst, Hedrick’s G’st, Sherwin’s MI, and Jost’s D to recent fragmentation events across a range of population sizes and sampling regimes. We constructed an individual-based model, which used a factorial design to compare effects of varying population size, presence or absence of overlapping generations, and presence or absence of population sub-structuring. Increases in population size, overlapping generations, and population sub-structuring each reduced Fst, G’st, MI, and D. The signal of fragmentation was detected within two generations for all metrics. However, the magnitude of the change in each was small in all cases, and when Ne was >100 individuals it was extremely small. Multi-generational sampling and population estimates are required to differentiate the signal of background divergence from changes in Fst, G’st, MI, and D associated with fragmentation. Finally, the window during which rapid change in Fst, G’st, MI, and D between generations occurs can be small, and if missed would lead to inconclusive results. For these reasons, use of Fst, G’st, MI, or D for detecting and monitoring changes in connectivity is likely to prove difficult in real-world scenarios. We advocate use of genetic monitoring only in conjunction with estimates of actual movement among patches such that one could compare current movement with the genetic signature of past movement to determine whether there has been a change. PMID:23704965

  7. Markov Switching Model for Quick Detection of Event Related Desynchronization in EEG

    Directory of Open Access Journals (Sweden)

    Giuseppe Lisi

    2018-02-01

    Full Text Available Quick detection of motor intentions is critical in order to minimize the time required to activate a neuroprosthesis. We propose a Markov Switching Model (MSM) to achieve quick detection of an event related desynchronization (ERD) elicited by motor imagery (MI) and recorded by electroencephalography (EEG). Conventional brain computer interfaces (BCI) rely on sliding window classifiers in order to perform online continuous classification of the rest vs. MI classes. Based on this approach, the detection of abrupt changes in the sensorimotor power suffers from an intrinsic delay caused by the necessity of computing an estimate of variance across several tenths of a second. Here we propose to avoid explicitly computing the EEG signal variance, and estimate the ERD state directly from the voltage information, in order to reduce the detection latency. This is achieved by using a model suitable in situations characterized by abrupt changes of state, the MSM. In our implementation, the model takes the form of a Gaussian observation model whose variance is governed by two latent discrete states with Markovian dynamics. Its objective is to estimate the brain state (i.e., rest vs. ERD) given the EEG voltage, spatially filtered by common spatial pattern (CSP), as observation. The two variances associated with the two latent states are calibrated using the variance of the CSP projection during rest and MI, respectively. The transition matrix of the latent states is optimized by the “quickest detection” strategy that minimizes a cost function of detection latency and false positive rate. Data collected by a dry EEG system from 50 healthy subjects was used to assess performance and compare the MSM with several logistic regression classifiers of different sliding window lengths. As a result, the MSM achieves a significantly better tradeoff between latency, false positive and true positive rates. The proposed model could be used to achieve a more reactive and
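
    The filtering step can be illustrated with a minimal sketch that is not the authors' implementation: a two-state Markov switching filter on a CSP-projected sample stream in which the latent state only switches the observation variance. The transition probability and the two variances are assumed values here, whereas the paper calibrates them from rest/MI data and a quickest-detection criterion.

```python
import numpy as np

def erd_posterior(x, var_rest=1.0, var_erd=0.25, p_stay=0.999):
    """x: 1-D array of CSP-filtered EEG samples. Returns P(ERD state) per sample."""
    variances = np.array([var_rest, var_erd])
    T = np.array([[p_stay, 1 - p_stay],          # latent-state transition matrix
                  [1 - p_stay, p_stay]])
    belief = np.array([0.5, 0.5])
    post_erd = np.empty(len(x))
    for i, xi in enumerate(x):
        lik = np.exp(-0.5 * xi ** 2 / variances) / np.sqrt(2 * np.pi * variances)
        belief = lik * (T.T @ belief)            # predict with T, then update with lik
        belief /= belief.sum()
        post_erd[i] = belief[1]
    return post_erd

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 2000),    # "rest": higher variance
                    rng.normal(0, 0.5, 2000)])   # "ERD": reduced variance
detected = erd_posterior(x) > 0.9
print(detected[:2000].mean(), detected[2000:].mean())
```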

  8. Electromyography-based seizure detector: Preliminary results comparing a generalized tonic-clonic seizure detection algorithm to video-EEG recordings.

    Science.gov (United States)

    Szabó, Charles Ákos; Morgan, Lola C; Karkar, Kameel M; Leary, Linda D; Lie, Octavian V; Girouard, Michael; Cavazos, José E

    2015-09-01

    Automatic detection of generalized tonic-clonic seizures (GTCS) will facilitate patient monitoring and early intervention to prevent comorbidities, recurrent seizures, or death. Brain Sentinel (San Antonio, Texas, USA) developed a seizure-detection algorithm evaluating surface electromyography (sEMG) signals during GTCS. This study aims to validate the seizure-detection algorithm using inpatient video-electroencephalography (EEG) monitoring. sEMG was recorded unilaterally from the biceps/triceps muscles in 33 patients (17 white/16 male) with a mean age of 40 (range 14-64) years who were admitted for video-EEG monitoring. Maximum voluntary biceps contraction was measured in each patient to set up the baseline physiologic muscle threshold. The raw EMG signal was recorded using conventional amplifiers, sampling at 1,024 Hz and filtered with a 60 Hz noise detection algorithm before it was processed with three band-pass filters at pass frequencies of 3-40, 130-240, and 300-400 Hz. A seizure-detection algorithm utilizing Hotelling's T-squared power analysis of compound muscle action potentials was used to identify GTCS and correlated with video-EEG recordings. In 1,399 h of continuous recording, there were 196 epileptic seizures (21 GTCS, 96 myoclonic, 28 tonic, 12 absence, and 42 focal seizures with or without loss of awareness) and 4 nonepileptic spells. During retrospective, offline evaluation of sEMG from the biceps alone, the algorithm detected 20 GTCS (95%) in 11 patients, averaging within 20 s of electroclinical onset of generalized tonic activity, as identified by video-EEG monitoring. Only one false-positive detection occurred during the postictal period following a GTCS, but false alarms were not triggered by other seizure types or spells. Brain Sentinel's seizure detection algorithm demonstrated excellent sensitivity and specificity for identifying GTCS recorded in an epilepsy monitoring unit. Further studies are needed in larger patient groups, including

  9. A signal detection method for temporal variation of adverse effect with vaccine adverse event reporting system data.

    Science.gov (United States)

    Cai, Yi; Du, Jingcheng; Huang, Jing; Ellenberg, Susan S; Hennessy, Sean; Tao, Cui; Chen, Yong

    2017-07-05

    To identify safety signals by manual review of individual report in large surveillance databases is time consuming; such an approach is very unlikely to reveal complex relationships between medications and adverse events. Since the late 1990s, efforts have been made to develop data mining tools to systematically and automatically search for safety signals in surveillance databases. Influenza vaccines present special challenges to safety surveillance because the vaccine changes every year in response to the influenza strains predicted to be prevalent that year. Therefore, it may be expected that reporting rates of adverse events following flu vaccines (number of reports for a specific vaccine-event combination/number of reports for all vaccine-event combinations) may vary substantially across reporting years. Current surveillance methods seldom consider these variations in signal detection, and reports from different years are typically collapsed together to conduct safety analyses. However, merging reports from different years ignores the potential heterogeneity of reporting rates across years and may miss important safety signals. Reports of adverse events from 1990 to 2013 were extracted from the Vaccine Adverse Event Reporting System (VAERS) database and formatted into a three-dimensional data array with types of vaccine, groups of adverse events and reporting time as the three dimensions. We propose a random effects model to test the heterogeneity of reporting rates for a given vaccine-event combination across reporting years. The proposed method provides a rigorous statistical procedure to detect differences of reporting rates among years. We also introduce a new visualization tool to summarize the result of the proposed method when applied to multiple vaccine-adverse event combinations. We applied the proposed method to detect safety signals of FLU3, an influenza vaccine containing three flu strains, in the VAERS database. We showed that it had high

  10. Automatic seismic event detection using migration and stacking: a performance and parameter study in Hengill, southwest Iceland

    Science.gov (United States)

    Wagner, F.; Tryggvason, A.; Roberts, R.; Lund, B.; Gudmundsson, Ó.

    2017-06-01

    We investigate the performance of a seismic event detection algorithm using migration and stacking of seismic traces. The focus lies on determining optimal data-dependent detection parameters for a data set from a temporary network in the volcanically active Hengill area, southwest Iceland. We test variations of the short-term average to long-term average and Kurtosis functions, calculated from filtered seismic traces, as input data. With optimal detection parameters, our algorithm identified 94 per cent (219 events) of the events detected by the South Iceland Lowlands (SIL) system, that is, the automatic system routinely used in Iceland, as well as a further 209 events, previously missed. The assessed rate of incorrect (false) detections was 25 per cent for our algorithm, which was considerably better than that from SIL (40 per cent). Empirical tests show that well-functioning processing parameters can be effectively selected based on analysis of small, representative subsections of data. Our migration approach is more computationally expensive than some alternatives, but not prohibitively so, and it appears well suited to analysis of large swarms of low magnitude events with interevent times on the order of seconds. It is, therefore, an attractive, practical tool for monitoring of natural or anthropogenic seismicity related to, for example, volcanoes, drilling or fluid injection.
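
    A minimal sketch of the short-term-average/long-term-average characteristic function that feeds such a detector is given below; the window lengths are assumed values, and the migration and stacking over a travel-time grid is not shown.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0, eps=1e-10):
    """Classic STA/LTA ratio of the squared (energy) trace."""
    energy = np.asarray(trace, dtype=float) ** 2
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    n = min(len(sta), len(lta))                 # align both windows at their end samples
    return sta[-n:] / (lta[-n:] + eps)

fs = 200.0
rng = np.random.default_rng(2)
trace = rng.standard_normal(int(60 * fs))
trace[6000:6100] += 5 * rng.standard_normal(100)        # injected "event" at t = 30 s
ratio = sta_lta(trace, fs)
print(ratio.argmax(), ratio.max())
```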

  11. FOREWORD: 3rd Symposium on Large TPCs for Low Energy Event Detection

    Science.gov (United States)

    Irastorza, Igor G.; Colas, Paul; Gorodetzky, Phillippe

    2007-05-01

    The Third International Symposium on large TPCs for low-energy rare-event detection was held at Carré des sciences, Poincaré auditorium, 25 rue de la Montagne Ste Geneviève in Paris on 11-12 December 2006. This prestigious location, belonging to the Ministry of Research, is housed in the former Ecole Polytechnique. The meeting, held in Paris every two years, gathers a significant community of physicists involved in rare event detection. Its purpose is an extensive discussion of present and future projects using large TPCs for low-energy, low-background detection of rare events (low-energy neutrinos, dark matter, solar axions). The use of a new generation of Micro-Pattern Gaseous Detectors (MPGD) appears to be a promising way to reach this goal. The program this year was enriched by a new session devoted to the detection challenge of polarized gamma rays, relevant novel experimental techniques and the impact on particle physics, astrophysics and astronomy. A very particular feature of this conference is the large variety of talks, ranging from purely theoretical to purely experimental subjects, including novel technological aspects. This allows discussion and exchange of useful information and new ideas that are emerging to address particle physics experimental challenges. The scientific highlights of the Symposium came on many fronts: the status of low-energy neutrino physics and double-beta decay; new ideas on double-beta decay experiments; gamma-ray polarization measurement combining high-precision TPCs with MPGD read-out; dark matter challenges in both axion and WIMP searches, with new emerging ideas for detection improvements; and progress in gaseous and liquid TPCs for rare event detection. Georges Charpak opened the meeting with a talk on gaseous detectors for applications in the bio-medical field. He also underlined the importance of new MPGD detectors for both physics and applications. There were about 100 registered participants at the symposium. The successful

  12. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  13. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations for the watching experience, moving high-quality web video streaming content from the small screens of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change across various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
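
    The two temporal metrics can be illustrated with a small sketch under assumed thresholds (the paper's exact definitions may differ): a freeze is taken to be any inter-frame gap much longer than the nominal frame period, the freeze time ratio is frozen time over total playback time, and the rate of freeze events is the number of distinct freezes per second.

```python
import numpy as np

def freeze_metrics(timestamps, nominal_fps=30.0, gap_factor=3.0):
    """timestamps: sorted array of displayed-frame times in seconds."""
    gaps = np.diff(timestamps)
    threshold = gap_factor / nominal_fps              # e.g. three missed frame periods
    freezes = gaps[gaps > threshold]
    duration = timestamps[-1] - timestamps[0]
    freeze_time_ratio = freezes.sum() / duration if duration > 0 else 0.0
    rate_of_freeze_events = len(freezes) / duration if duration > 0 else 0.0
    return freeze_time_ratio, rate_of_freeze_events

ts = np.cumsum(np.full(300, 1 / 30.0))                # 10 s of nominal 30 fps playback
ts[150:] += 0.8                                       # simulate one 0.8 s stall
print(freeze_metrics(ts))
```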

  14. Wavelet based automated postural event detection and activity classification with single imu - biomed 2013.

    Science.gov (United States)

    Lockhart, Thurmon E; Soangra, Rahul; Zhang, Jian; Wu, Xuefan

    2013-01-01

    Mobility characteristics associated with activities of daily living such as sitting down, lying down, rising up, and walking are considered to be important in maintaining functional independence and a healthy lifestyle, especially for the growing elderly population. Characteristics of postural transitions such as sit-to-stand are widely used by clinicians as a physical indicator of health, and walking is used as an important mobility assessment tool. Many tools have been developed to assist in the assessment of functional levels and to detect a person’s activities during daily life. These include questionnaires, observation, diaries, kinetic and kinematic systems, and validated functional tests. These measures are costly and time consuming, rely on subjective patient recall and may not accurately reflect functional ability in the patient’s home. In order to provide a low-cost, objective assessment of functional ability, an inertial measurement unit (IMU) using MEMS technology has been employed to ascertain ADLs. These measures facilitate long-term monitoring of activities of daily living using wearable sensors. IMU systems are desirable for monitoring human postures since they respond to both the frequency and the intensity of movements and measure both dc (gravitational acceleration vector) and ac (acceleration due to body movement) components at a low cost. This has enabled the development of a small, lightweight, portable system that can be worn by a free-living subject without motion impediment – TEMPO (Technology Enabled Medical Precision Observation). Using this IMU system, we acquired indirect measures of biomechanical variables that can be used as an assessment of individual mobility characteristics with accuracy and recognition rates that are comparable to modern motion capture systems. In this study, five subjects performed various ADLs, and mobility measures such as posture transitions and gait characteristics were obtained. We developed postural event detection

  15. Objective detection of long-term slow slip events along the Nankai Trough using GNSS data (1996-2016)

    Science.gov (United States)

    Kobayashi, Akio

    2017-12-01

    This paper presents a method for objective detection of long-term slow slip events with durations on the order of years, on the plate boundary along the Nankai Trough, relying on global navigation satellite system daily coordinate data. The Chugoku region of Japan was held fixed to remove common mode errors, and a displacement component was calculated relative to the direction of plate subduction. Correlations were then calculated between this displacement component and a 3-year ramp function with a 1-year slope. Nearly all periods of strong correlation coincide with periods of previously reported long-term slow slip events. A period of strong correlation around the Kii Channel in 2000-2002 is attributed to a previously undocumented long-term slow slip event beneath the Kii Channel and the eastern part of Shikoku Island with an equivalent moment magnitude of 6.6. This detection method reveals variation among long-term slow slip events along the Nankai Trough.
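
    A minimal sketch of the template-correlation idea, with assumed window lengths: a 3-year ramp whose rise takes 1 year is slid across a daily displacement series, and windows with high Pearson correlation flag candidate slow slip episodes.

```python
import numpy as np

def ramp_template(days_per_year=365):
    """One flat year, a one-year linear rise, and one flat year after the rise."""
    flat = days_per_year
    rise = days_per_year
    return np.concatenate([np.zeros(flat), np.linspace(0.0, 1.0, rise), np.ones(flat)])

def sliding_correlation(series, template):
    n = len(template)
    out = np.full(len(series) - n + 1, np.nan)
    for i in range(len(out)):
        win = series[i:i + n]
        if np.std(win) > 0:
            out[i] = np.corrcoef(win, template)[0, 1]
    return out

rng = np.random.default_rng(3)
days = 7000
disp = rng.normal(0.0, 0.5, days)                     # noisy daily displacement (mm)
disp[3000:3365] += np.linspace(0.0, 5.0, 365)         # one-year transient ramp
disp[3365:] += 5.0                                    # offset persists afterwards
corr = sliding_correlation(disp, ramp_template())
print(np.nanargmax(corr), np.nanmax(corr))
```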

  16. An integrated video-analysis software system designed for movement detection and sleep analysis. Validation of a tool for the behavioural study of sleep.

    Science.gov (United States)

    Scatena, Michele; Dittoni, Serena; Maviglia, Riccardo; Frusciante, Roberto; Testani, Elisa; Vollono, Catello; Losurdo, Anna; Colicchio, Salvatore; Gnoni, Valentina; Labriola, Claudio; Farina, Benedetto; Pennisi, Mariano Alberto; Della Marca, Giacomo

    2012-02-01

    The aim of the present study was to develop and validate a software tool for the detection of movements during sleep, based on the automated analysis of video recordings. This software is aimed at detecting and quantifying movements and at evaluating periods of sleep and wake. We applied open-source software previously distributed on the web and intended for video surveillance (ZoneMinder, ZM). A validation study was performed: computed movement analysis was compared with two standardised, 'gold standard' methods for the analysis of sleep-wake cycles: actigraphy and laboratory-based video-polysomnography. Sleep variables evaluated by ZM were not different from those measured by traditional sleep-scoring systems. Bland-Altman plots showed an overlap between the scores obtained with ZM, PSG and actigraphy, with a slight tendency of ZM to overestimate nocturnal awakenings. ZM showed a good degree of accuracy both with respect to PSG (79.9%) and actigraphy (83.1%), and had very high sensitivity (ZM vs. PSG: 90.4%; ZM vs. actigraphy: 89.5%) and relatively lower specificity (ZM vs. PSG: 42.3%; ZM vs. actigraphy: 65.4%). The computer-assisted motion analysis is reliable and reproducible, and it can allow a reliable estimate of some sleep and wake parameters. The motion-based sleep analysis shows a trend to overestimate wakefulness. The possibility of measuring sleep from video recordings may be useful in those clinical and experimental conditions in which traditional PSG studies may not be performed. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
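
    Motion quantification from video can be approximated with plain frame differencing, as in the hedged OpenCV sketch below; it is an illustrative stand-in for the ZoneMinder-based analysis, not the validated software itself.

```python
import cv2
import numpy as np

def motion_per_frame(video_path, blur_ksize=5, diff_threshold=25):
    """Per-frame motion score: fraction of pixels changed between consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
        if prev is not None:
            diff = cv2.absdiff(gray, prev)
            scores.append(float((diff > diff_threshold).mean()))
        prev = gray
    cap.release()
    return np.array(scores)  # threshold this series to score epochs as movement/quiescence
```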

  17. An automated cross-correlation based event detection technique and its application to surface passive data set

    Science.gov (United States)

    Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike

    2013-01-01

    In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique that is based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Also, our algorithm has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
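
    The following sketch shows a conventional STA/LTA energy ratio and a simplified correlation of those ratios against the array average; window lengths and the stacking scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sta_lta(trace, fs, sta_s=0.5, lta_s=10.0):
    """Classical STA/LTA energy ratio for one trace (window lengths are typical values)."""
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.cumsum(energy)
    sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
    lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
    n = min(len(sta), len(lta))
    return sta[-n:] / np.maximum(lta[-n:], 1e-12)

def stack_correlation(ratios, window_n):
    """Windowed correlation of each trace's STA/LTA series with the array average;
    high values indicate energy bursts coherent across the array."""
    ratios = np.asarray(ratios)              # shape (n_traces, n_samples)
    stack = ratios.mean(axis=0)
    out = np.zeros(ratios.shape[1])
    for start in range(0, ratios.shape[1] - window_n, window_n):
        seg = ratios[:, start:start + window_n]
        ref = stack[start:start + window_n]
        cc = [np.corrcoef(t, ref)[0, 1] for t in seg]
        out[start:start + window_n] = np.nanmean(cc)
    return out
```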

  18. Detection of microsleep events in a car driving simulation study using electrocardiographic features

    Directory of Open Access Journals (Sweden)

    Lenis Gustavo

    2016-09-01

    Microsleep events (MSE) are short intrusions of sleep under the demand of sustained attention. They can pose a major threat to safety while driving a car and are considered one of the most significant causes of traffic accidents. Driver fatigue and MSE account for up to 20% of all car crashes in Europe and at least 100,000 accidents in the US every year. Unfortunately, there is no standardized test to quantify the degree of vigilance of a driver. To address this problem, different approaches based on biosignal analysis have been studied in the past. In this paper, we investigate an electrocardiographic-based detection of MSE using morphological and rhythmical features. 14 records from a car driving simulation study with a high incidence of MSE were analyzed, and the behavior of the ECG features before and after an MSE in relation to reference baseline values (without drowsiness) was investigated. The results show that MSE cannot be detected (or predicted) using only the ECG. However, in the presence of MSE, the rhythmical and morphological features were observed to be significantly different from the ones calculated for the reference signal without sleepiness. In particular, when MSE were present, the heart rate diminished while the heart rate variability increased. The time intervals between the P wave and the R peak, and between the R peak and the T wave, as well as their dispersion, also increased. This demonstrates a noticeable change in the autonomic regulation of the heart. In the future, the ECG parameters could be used as a surrogate measure of fatigue.
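
    Typical rhythmical ECG features of the kind discussed above can be derived from R-peak times; the sketch below computes the mean heart rate and two standard heart rate variability measures (SDNN, RMSSD). It only illustrates this class of features and is not the study's exact feature set.

```python
import numpy as np

def hr_hrv_features(r_peak_times_s):
    """Mean heart rate and basic HRV measures from R-peak times given in seconds."""
    rr = np.diff(np.asarray(r_peak_times_s, dtype=float))   # R-R intervals in s
    hr = 60.0 / rr.mean()                                   # beats per minute
    sdnn = rr.std(ddof=1) * 1000.0                          # SDNN in ms
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0     # RMSSD in ms
    return {"hr_bpm": hr, "sdnn_ms": sdnn, "rmssd_ms": rmssd}
```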

  19. On the feasibility of using satellite gravity observations for detecting large-scale solid mass transfer events

    Science.gov (United States)

    Peidou, Athina C.; Fotopoulos, Georgia; Pagiatakis, Spiros

    2017-10-01

    The main focus of this paper is to assess the feasibility of utilizing dedicated satellite gravity missions in order to detect large-scale solid mass transfer events (e.g. landslides). Specifically, a sensitivity analysis of Gravity Recovery and Climate Experiment (GRACE) gravity field solutions in conjunction with simulated case studies is employed to predict gravity changes due to past subaerial and submarine mass transfer events, namely the Agulhas slump in southeastern Africa and the Heart Mountain Landslide in northwestern Wyoming. The detectability of these events is evaluated by taking into account the expected noise level in the GRACE gravity field solutions and simulating their impact on the gravity field through forward modelling of the mass transfer. The spectral content of the estimated gravity changes induced by a simulated large-scale landslide event is estimated for the known spatial resolution of the GRACE observations using wavelet multiresolution analysis. The results indicate that both the Agulhas slump and the Heart Mountain Landslide could have been detected by GRACE, resulting in changes of |0.4| and |0.18| mGal in the GRACE solutions, respectively. The suggested methodology is further extended to the case studies of the submarine landslide in Tohoku, Japan, and the Grand Banks landslide in Newfoundland, Canada. The detectability of these events using GRACE solutions is assessed through their impact on the gravity field.
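
    For intuition about the magnitudes involved, a point-mass approximation gives a rough estimate of the gravity change produced by a displaced mass; the numbers in the example are hypothetical, and this is not the forward modelling used in the study.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_change_mgal(mass_kg, distance_m):
    """Gravity change of a displaced point mass at a given distance, in mGal
    (1 mGal = 1e-5 m/s^2) -- a crude plausibility check only."""
    return G * mass_kg / distance_m ** 2 / 1e-5

# e.g. ~1e16 kg of displaced rock viewed from ~400 km (roughly satellite altitude)
print(f"{gravity_change_mgal(1e16, 4.0e5):.3f} mGal")
```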

  20. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    Science.gov (United States)

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2017-04-01

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
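
    To make the sample-by-sample evaluation idea concrete, the sketch below implements the simplest velocity-threshold (I-VT) classifier and a plain sample-level agreement score; the threshold value is a common default, and this code is not any of the ten evaluated algorithms.

```python
import numpy as np

def ivt_classify(x_deg, y_deg, fs, vel_threshold=30.0):
    """Label each gaze sample saccade/fixation by angular velocity (deg/s) thresholding."""
    vx = np.gradient(np.asarray(x_deg, dtype=float)) * fs
    vy = np.gradient(np.asarray(y_deg, dtype=float)) * fs
    speed = np.hypot(vx, vy)
    return np.where(speed > vel_threshold, "saccade", "fixation")

def sample_agreement(labels_a, labels_b):
    """Sample-by-sample agreement between two labelings (e.g. algorithm vs. expert)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    return float((a == b).mean())
```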

  1. First Satellite-detected Perturbations of Outgoing Longwave Radiation Associated with Blowing Snow Events over Antarctica

    Science.gov (United States)

    Yang, Yuekui; Palm, Stephen P.; Marshak, Alexander; Wu, Dong L.; Yu, Hongbin; Fu, Qiang

    2014-01-01

    We present the first satellite-detected perturbations of the outgoing longwave radiation (OLR) associated with blowing snow events over the Antarctic ice sheet using data from the Cloud-Aerosol Lidar with Orthogonal Polarization and Clouds and the Earth's Radiant Energy System. Significant cloud-free OLR differences are observed between the clear and blowing snow sky, with the sign and magnitude depending on season and time of day. During nighttime, OLRs are usually larger when blowing snow is present; the average difference in OLR between scenes without and with blowing snow over the East Antarctic Ice Sheet is about 5.2 W/m2 for the winter months of 2009. During daytime, in contrast, the OLR perturbation is usually smaller or even has the opposite sign. The observed seasonal variations and day-night differences in the OLR perturbation are consistent with theoretical calculations of the influence of blowing snow on OLR. Detailed atmospheric profiles are needed to quantify the radiative effect of blowing snow from the satellite observations.

  2. Snake scales, partial exposure, and the Snake Detection Theory: A human event-related potentials study

    Science.gov (United States)

    Van Strien, Jan W.; Isbell, Lynne A.

    2017-01-01

    Studies of event-related potentials in humans have established larger early posterior negativity (EPN) in response to pictures depicting snakes than to pictures depicting other creatures. Ethological research has recently shown that macaques and wild vervet monkeys respond strongly to partially exposed snake models and scale patterns on the snake skin. Here, we examined whether snake skin patterns and partially exposed snakes elicit a larger EPN in humans. In Task 1, we employed pictures with close-ups of snake skins, lizard skins, and bird plumage. In Task 2, we employed pictures of partially exposed snakes, lizards, and birds. Participants watched a random rapid serial visual presentation of these pictures. The EPN was scored as the mean activity (225–300 ms after picture onset) at occipital and parieto-occipital electrodes. Consistent with previous studies, and with the Snake Detection Theory, the EPN was significantly larger for snake skin pictures than for lizard skin and bird plumage pictures, and for lizard skin pictures than for bird plumage pictures. Likewise, the EPN was larger for partially exposed snakes than for partially exposed lizards and birds. The results suggest that the EPN snake effect is partly driven by snake skin scale patterns, which are otherwise rare in nature. PMID:28387376

  3. Real-Time Microbiology Laboratory Surveillance System to Detect Abnormal Events and Emerging Infections, Marseille, France.

    Science.gov (United States)

    Abat, Cédric; Chaudet, Hervé; Colson, Philippe; Rolain, Jean-Marc; Raoult, Didier

    2015-08-01

    Infectious diseases are a major threat to humanity, and accurate surveillance is essential. We describe how to implement a laboratory data-based surveillance system in a clinical microbiology laboratory. Two historical Microsoft Excel databases were implemented. The data were then sorted and used to execute the following 2 surveillance systems in Excel: the Bacterial real-time Laboratory-based Surveillance System (BALYSES), for monitoring the number of patients infected with bacterial species isolated at least once in our laboratory during the study period; and the Marseille Antibiotic Resistance Surveillance System (MARSS), which surveys the primary β-lactam resistance phenotypes for 15 selected bacterial species. The first historical database contained 174,853 identifications of bacteria, and the second contained 12,062 results of antibiotic susceptibility testing. From May 21, 2013, through June 4, 2014, BALYSES and MARSS enabled the detection of 52 abnormal events for 24 bacterial species, leading to 19 official reports. This system is currently being refined and improved.

  4. Fractal analysis of GPS time series for early detection of disastrous seismic events

    Science.gov (United States)

    Filatov, Denis M.; Lyubushin, Alexey A.

    2017-03-01

    A new method of fractal analysis of time series for estimating the chaoticity of behaviour of open stochastic dynamical systems is developed. The method is a modification of the conventional detrended fluctuation analysis (DFA) technique. We start by analysing both methods from the physical point of view and demonstrate the difference between them, which results in a higher accuracy of the new method compared to the conventional DFA. Then, applying the developed method to estimate the measure of chaoticity of a real dynamical system, the Earth's crust, we reveal that the latter exhibits two distinct mechanisms of transition to a critical state: while the first mechanism has already been known from numerous studies of other dynamical systems, the second one is new and has not previously been described. Using GPS time series, we demonstrate the efficiency of the developed method in identifying critical states of the Earth's crust. Finally, we employ the method to solve a practically important task: we show how the developed measure of chaoticity can be used for early detection of disastrous seismic events and provide a detailed discussion of the numerical results, which are shown to be consistent with the outcomes of other research on the topic.
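
    For reference, conventional DFA (the baseline the new method modifies) estimates a scaling exponent from piecewise linearly detrended fluctuations, as in this minimal sketch; the scale set is an arbitrary illustrative choice.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Conventional detrended fluctuation analysis (DFA-1) scaling exponent."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)             # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))              # fluctuation at this scale
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return alpha                                     # slope of log F(n) vs log n
```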

  5. Security Event Counts Estimate in Automated Systems for Network Attacks Detection

    Directory of Open Access Journals (Sweden)

    D. O. Kovalev

    2011-03-01

    The specifics of information security monitoring systems in large automated systems are analyzed. The distribution of security events over different time intervals was determined and then used to estimate security event counts. The proposed event count estimation method is based on a dynamically updated table of moments. The method makes it possible to determine the acceptable number of security events at different time intervals, as well as situations in which that number is exceeded, which signal abnormal network activity.

  6. In situ detection of water quality contamination events based on signal complexity analysis using online ultraviolet-visible spectral sensor.

    Science.gov (United States)

    Huang, Pingjie; Wang, Ke; Hou, Dibo; Zhang, Jian; Yu, Jie; Zhang, Guangxin

    2017-08-01

    The detection of contaminants in water distribution systems is essential to protect public health from potentially harmful compounds resulting from accidental spills or intentional releases. As a noninvasive optical technique, ultraviolet-visible (UV-Vis) spectroscopy is investigated for detecting contamination events. However, current methods for event detection are susceptible to noise. In this paper, a new method with lower sensitivity to noise is proposed to detect water quality contamination events by analyzing the complexity of the UV-Vis spectrum series. The proposed method applied approximate entropy (ApEn) to measure the complexity of the spectrum signals, distinguishing between normal and abnormal signals. The impact of noise was attenuated thanks to ApEn's insensitivity to signal disturbance. This method was tested on a data set from a real water distribution system with simulated contamination events at various concentrations. Results from the experiment and analysis show that the proposed method has good noise tolerance and provides a better detection result than the autoregressive model and the sequential probability ratio test.
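
    Approximate entropy for a 1-D series can be computed as in the sketch below; the embedding dimension and tolerance factor are common defaults, not values taken from the paper, and the pairwise-distance implementation is quadratic in series length.

```python
import numpy as np

def approx_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D series; higher values mean more irregularity."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                            # tolerance

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of length-m templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)                     # fraction of matching templates
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```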

  7. Automated Sensor Tuning for Seismic Event Detection at a Carbon Capture, Utilization, and Storage Site, Farnsworth Unit, Ochiltree County, Texas

    Science.gov (United States)

    Ziegler, A.; Balch, R. S.; Knox, H. A.; Van Wijk, J. W.; Draelos, T.; Peterson, M. G.

    2016-12-01

    We present results (e.g. seismic detections and STA/LTA detection parameters) from a continuous downhole seismic array in the Farnsworth Field, an oil field in Northern Texas that hosts an ongoing carbon capture, utilization, and storage project. Specifically, we evaluate data from a passive vertical monitoring array consisting of 16 levels of 3-component 15 Hz geophones installed in the field and continuously recording since January 2014. This detection database is directly compared to ancillary data (i.e. wellbore pressure) to determine whether there is any relationship between seismic observables and CO2 injection and pressure maintenance in the field. Of particular interest is the detection of relatively low-amplitude signals constituting long-period long-duration (LPLD) events that may be associated with slow shear-slip analogous to low frequency tectonic tremor. While this category of seismic event provides great insight into the dynamic behavior of the pressurized subsurface, it is inherently difficult to detect. To automatically detect seismic events using effective data processing parameters, an automated sensor tuning (AST) algorithm developed by Sandia National Laboratories is being utilized. AST exploits ideas from neuro-dynamic programming (reinforcement learning) to automatically self-tune and determine optimal detection parameter settings. AST adapts in near real-time to changing conditions and automatically self-tunes a signal detector to identify (detect) only signals from events of interest, leading to a reduction in the number of missed legitimate event detections and the number of false event detections. Funding for this project is provided by the U.S. Department of Energy's (DOE) National Energy Technology Laboratory (NETL) through the Southwest Regional Partnership on Carbon Sequestration (SWP) under Award No. DE-FC26-05NT42591. Additional support has been provided by site operator Chaparral Energy, L.L.C. and Schlumberger Carbon Services. Sandia National

  8. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    Science.gov (United States)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed agreement with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.
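
    The watershed step of such a pipeline can be sketched with scikit-image as below; the thresholding, marker selection, and coverage estimate are simplified assumptions, and the relaxation-based labelling used in the study is not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def bright_mat_regions(gray_image):
    """Segment bright patches (e.g. white bacterial mats) in a greyscale seafloor
    mosaic with a distance-transform watershed; returns labels and area coverage."""
    mask = gray_image > threshold_otsu(gray_image)            # bright foreground
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask.astype(int), min_distance=10)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)    # one marker per peak
    labels = watershed(-distance, markers, mask=mask)
    coverage = mask.mean()                                    # fraction of mosaic covered
    return labels, coverage
```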

  9. Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API

    OpenAIRE

    Hosseini, Hossein; Xiao, Baicen; Clark, Andrew; Poovendran, Radha

    2017-01-01

    Due to the growth of video data on the Internet, automatic video analysis has gained a lot of attention from academia as well as from companies such as Facebook, Twitter and Google. In this paper, we examine the robustness of video analysis algorithms in adversarial settings. Specifically, we propose targeted attacks on two fundamental classes of video analysis algorithms, namely video classification and shot detection. We show that an adversary can subtly manipulate a video in such a way that a human...

  10. Effects of rainfall events on the occurrence and detection efficiency of viruses in river water impacted by combined sewer overflows.

    Science.gov (United States)

    Hata, Akihiko; Katayama, Hiroyuki; Kojima, Keisuke; Sano, Shoichi; Kasuga, Ikuro; Kitajima, Masaaki; Furumai, Hiroaki

    2014-01-15

    Rainfall events can introduce large amounts of microbial contaminants, including human enteric viruses, into surface water through intermittent discharges from combined sewer overflows (CSOs). The present study aimed to investigate the effect of rainfall events on viral loads in surface waters impacted by CSOs and the reliability of molecular methods for the detection of enteric viruses. The reliability of virus detection in the samples was assessed by using process controls for the virus concentration, nucleic acid extraction and reverse transcription (RT)-quantitative PCR (qPCR) steps, which allowed accurate estimation of virus detection efficiencies. Recovery efficiencies of poliovirus in river water samples collected during rainfall events (10%). The log10-transformed virus concentration efficiency was negatively correlated with the suspended solids concentration (r(2)=0.86), which increased significantly during rainfall events. Efficiencies of the DNA extraction and qPCR steps, determined with adenovirus type 5 and a primer sharing control, respectively, were lower in dry weather. However, no clear relationship was observed between organic water quality parameters and the efficiencies of these two steps. Observed concentrations of indigenous enteric adenoviruses, GII noroviruses, enteroviruses, and Aichi viruses increased during rainfall events even though the virus concentration efficiency was presumed to be lower than in dry weather. The present study highlights the importance of using appropriate process controls to evaluate accurately the concentration of waterborne enteric viruses in natural waters impacted by wastewater discharge, stormwater, and CSOs. © 2013.

  11. A secure distributed logistic regression protocol for the detection of rare adverse drug events.

    Science.gov (United States)

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-05-01

    There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through

  12. UKIRT Microlensing Surveys as a Pathfinder for WFIRST: The Detection of Five Highly Extinguished Low-|b| Events

    Science.gov (United States)

    Shvartzvald, Y.; Bryden, G.; Gould, A.; Henderson, C. B.; Howell, S. B.; Beichman, C.

    2017-02-01

    Optical microlensing surveys are restricted from detecting events near the Galactic plane and center, where the event rate is thought to be the highest, due to the high optical extinction of these fields. In the near-infrared (NIR), however, the lower extinction leads to a corresponding increase in event detections and is a primary driver for the wavelength coverage of the WFIRST microlensing survey. During the 2015 and 2016 bulge observing seasons, we conducted NIR microlensing surveys with UKIRT in conjunction with and in support of the Spitzer and Kepler microlensing campaigns. Here, we report on five highly extinguished (A_H = 0.81–1.97), low-Galactic-latitude (-0.98 ≤ b ≤ -0.36) microlensing events discovered in our 2016 survey. Four of them were monitored with an hourly cadence by optical surveys but were not reported as discoveries, likely due to the high extinction. Our UKIRT surveys and suggested future NIR surveys enable the first measurement of the microlensing event rate in the NIR. This wavelength regime overlaps with the bandpass of the filter in which the WFIRST microlensing survey will conduct its highest-cadence observations, making this event rate derivation critically important for optimizing its yield.

  13. A robust real-time gait event detection using wireless gyroscope and its application on normal and altered gaits.

    Science.gov (United States)

    Gouwanda, Darwin; Gopalai, Alpha Agape

    2015-02-01

    Gait event detection allows clinicians and biomechanics researchers to determine the timing of gait events, to estimate the duration of the stance and swing phases and to segment gait data. It also aids biomedical engineers in improving the design of orthoses and FES (functional electrical stimulation) systems. In recent years, researchers have resorted to using gyroscopes to determine heel-strike (HS) and toe-off (TO) events in gait cycles. However, these methods are subject to significant delays when implemented in real-time gait monitoring devices, orthoses, and FES systems. Therefore, the work presented in this paper proposes a method that addresses these delays, to ensure real-time gait event detection. The proposed algorithm combines the use of heuristics and a zero-crossing method to identify HS and TO. Experiments involving (1) normal walking, (2) walking with a knee brace, and (3) walking with an ankle brace, for overground walking and treadmill walking, were designed to verify and validate the identified HS and TO. The performance of the proposed method was compared against established gait detection algorithms. It was observed that the proposed method produced a detection rate comparable to that of earlier reported methods while recording reduced time delays, at an average of 100 ms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
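
    A much-simplified zero-crossing approach to heel-strike and toe-off detection from shank angular velocity is sketched below; the sign convention, threshold, and search logic are assumptions and do not reproduce the heuristics of the proposed method.

```python
import numpy as np

def gait_events_from_gyro(gyro_z, fs, midswing_thresh=1.0):
    """Estimate toe-off and heel-strike times (s) from sagittal shank angular velocity.

    Mid-swing is taken as the entry into a prominent positive region; the preceding
    negative-to-positive and following positive-to-negative zero crossings are used
    as toe-off and heel-strike candidates, respectively.
    """
    w = np.asarray(gyro_z, dtype=float)
    above = w > midswing_thresh
    swing_starts = np.flatnonzero(~above[:-1] & above[1:])   # rising edges into mid-swing
    zc_down = np.flatnonzero((w[:-1] > 0) & (w[1:] <= 0))    # positive-to-negative crossings
    zc_up = np.flatnonzero((w[:-1] <= 0) & (w[1:] > 0))      # negative-to-positive crossings
    to_times, hs_times = [], []
    for s in swing_starts:
        before = zc_up[zc_up < s]
        after = zc_down[zc_down > s]
        if len(before):
            to_times.append(before[-1] / fs)   # toe-off shortly before mid-swing
        if len(after):
            hs_times.append(after[0] / fs)     # heel-strike shortly after mid-swing
    return np.array(to_times), np.array(hs_times)
```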

  14. Signal classification and event reconstruction for acoustic neutrino detection in sea water with KM3NeT

    Science.gov (United States)

    Kießling, Dominik

    2017-03-01

    The research infrastructure KM3NeT will comprise a multi cubic kilometer neutrino telescope that is currently being constructed in the Mediterranean Sea. Modules with optical and acoustic sensors are used in the detector. While the main purpose of the acoustic sensors is the position calibration of the detection units, they can be used as instruments for studies on acoustic neutrino detection, too. In this article, methods for signal classification and event reconstruction for acoustic neutrino detectors will be presented, which were developed using Monte Carlo simulations. For the signal classification the disk-like emission pattern of the acoustic neutrino signal is used. This approach improves the suppression of transient background by several orders of magnitude. Additionally, an event reconstruction is developed based on the signal classification. An overview of these algorithms will be presented and the efficiency of the classification will be discussed. The quality of the event reconstruction will also be presented.

  15. Signal classification and event reconstruction for acoustic neutrino detection in sea water with KM3NeT

    Directory of Open Access Journals (Sweden)

    Kießling Dominik

    2017-01-01

    The research infrastructure KM3NeT will comprise a multi cubic kilometer neutrino telescope that is currently being constructed in the Mediterranean Sea. Modules with optical and acoustic sensors are used in the detector. While the main purpose of the acoustic sensors is the position calibration of the detection units, they can be used as instruments for studies on acoustic neutrino detection, too. In this article, methods for signal classification and event reconstruction for acoustic neutrino detectors will be presented, which were developed using Monte Carlo simulations. For the signal classification the disk-like emission pattern of the acoustic neutrino signal is used. This approach improves the suppression of transient background by several orders of magnitude. Additionally, an event reconstruction is developed based on the signal classification. An overview of these algorithms will be presented and the efficiency of the classification will be discussed. The quality of the event reconstruction will also be presented.

  16. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. Techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  17. Top-Down and Bottom-Up Cues Based Moving Object Detection for Varied Background Video Sequences

    Directory of Open Access Journals (Sweden)

    Chirag I. Patel

    2014-01-01

    there is no need for background formulation and updates, as the method is background independent. Many bottom-up approaches and one combination of bottom-up and top-down approaches are proposed in the present paper. The proposed approaches are more efficient because they do not require learning a background model and are independent of previous video frames. Results indicate that the proposed approach works even against slight movements in the background and in various outdoor conditions.

  18. Unified framework for triaxial accelerometer-based fall event detection and classification using cumulants and hierarchical decision tree classifier.

    Science.gov (United States)

    Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram

    2015-08-01

    In this Letter, the authors present a unified framework for fall event detection and classification using the cumulants extracted from the acceleration (ACC) signals acquired using a single waist-mounted triaxial accelerometer. The main objective of this Letter is to find suitable representative cumulants and classifiers for effectively detecting and classifying different types of fall and non-fall events. The first level of the proposed hierarchical decision tree algorithm implements fall detection using fifth-order cumulants and a support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and an SVM. Finally, human activity classification is performed using the second-order cumulants and an SVM. The detection and classification results are compared with those of decision tree, naive Bayes, multilayer perceptron and SVM classifiers with different types of time-domain features, including the second-, third-, fourth- and fifth-order cumulants and the signal magnitude vector and signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features and the SVM classifier can achieve optimal detection and classification rates above 95%, as well as the lowest false alarm rate of 1.03%.
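
    The cumulant feature vector can be expressed through central moments, as in the sketch below (second- to fifth-order cumulants), which could then feed a standard SVM; the windowing and classifier settings are illustrative, not the Letter's configuration.

```python
import numpy as np
from scipy.stats import moment
from sklearn.svm import SVC

def cumulant_features(acc_magnitude):
    """Second- to fifth-order cumulants of an acceleration-magnitude segment,
    via central moments: k2 = mu2, k3 = mu3, k4 = mu4 - 3*mu2^2, k5 = mu5 - 10*mu2*mu3."""
    mu = {k: moment(np.asarray(acc_magnitude, dtype=float), k) for k in (2, 3, 4, 5)}
    return np.array([mu[2],
                     mu[3],
                     mu[4] - 3 * mu[2] ** 2,
                     mu[5] - 10 * mu[2] * mu[3]])

# Illustrative first-level fall / non-fall classifier:
# X: one feature vector per windowed ACC segment, y: 1 = fall, 0 = non-fall
# clf = SVC(kernel="rbf").fit(X, y)
```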

  19. Automatic detection of esophageal pressure events. Is there an alternative to rule-based criteria?

    DEFF Research Database (Denmark)

    Kruse-Andersen, S; Rütz, K; Kolberg, Jens Godsk

    1995-01-01

    curves generated by muscular contractions, rule-based criteria do not always select the pressure events most relevant for further analysis. We have therefore been searching for a new concept for automatic event recognition. The present study describes a new system, based on the method of neurocomputing...

  20. Detection of adverse events of transfusion in a teaching hospital in Ghana.

    Science.gov (United States)

    Owusu-Ofori,